From: Michiel de H. <mjl...@ya...> - 2009-01-19 14:33:25
|
I am also finding the continuing increase in memory usage, but this also occurs with other backends (I tried tkagg and pdf) and also without the call to savefig. One possibility is a circular reference in the quiver function that prevents data from being cleaned up.

--Michiel

--- On Mon, 1/19/09, Damon McDougall <dam...@gm...> wrote:

> From: Damon McDougall <dam...@gm...>
> Subject: [matplotlib-devel] Memory leak using savefig with MacOSX backend?
> To: "matplotlib development list" <mat...@li...>
> Date: Monday, January 19, 2009, 6:09 AM
> I'm looping over several files (about 1000) to produce a
> vector field plot for each data file I have. Doing this with
> the MacOSX backend appears to chew memory. My guess as to
> the source of the problem is the 'savefig' function
> (or possibly the way the MacOSX backend handles the saving
> of plots).
>
> I opened Activity Monitor to watch the usage of memory
> increase. Below is code that recreates the problem.
>
> [start]
>
> import matplotlib
> matplotlib.use('macosx')
> matplotlib.rc('font',**{'family':'serif','serif':['Computer Modern Roman']})
> matplotlib.rc('text', usetex=True)
> from pylab import *
>
> i = 0
> x = []
> y = []
> v1 = []
> v2 = []
>
> while(True):
>     f = open("%dresults.dat"%i,"r")
>     for line in f:
>         x.append(float(line.split()[0]))
>         y.append(float(line.split()[1]))
>         v1.append(float(line.split()[2]))
>         v2.append(float(line.split()[3]))
>     f.close()
>     hold(False)
>     figure(1)
>     quiver(x, y, v1, v2, color='b', units='width', scale=1.0)
>     xlabel('$x$')
>     ylabel('$y$')
>     grid(True)
>     print i
>     savefig('graph-%05d.pdf'%i)
>     close(1)
>     x = []
>     y = []
>     v1 = []
>     v2 = []
>     i = i + 1
>
> [end]
>
> Regards,
> --Damon
>
> ------------------------------------------------------------------------------
> This SF.net email is sponsored by:
> SourceForge Community
> SourceForge wants to tell your story.
> http://p.sf.net/sfu/sf-spreadtheword
> _______________________________________________
> Matplotlib-devel mailing list
> Mat...@li...
> https://lists.sourceforge.net/lists/listinfo/matplotlib-devel |
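[Editor's note] Michiel's circular-reference hypothesis is testable from the interpreter: plain reference counting cannot free cycles of objects on its own, so cyclic garbage lingers until the cycle collector runs. A minimal stdlib sketch (the `Payload` class is a toy stand-in, not a matplotlib object) showing cycles piling up until `gc.collect()` is called:

```python
import gc

class Payload(object):
    """Toy stand-in for the large arrays a quiver plot might hold."""
    def __init__(self, n):
        self.data = list(range(n))

def make_cycle():
    # Two objects referencing each other form a cycle that reference
    # counting never reclaims; only the cyclic collector frees it.
    a, b = Payload(1000), Payload(1000)
    a.partner, b.partner = b, a

gc.disable()                                  # let cycles accumulate, as in a tight loop
gc.collect()
baseline = len(gc.get_objects())
for _ in range(50):
    make_cycle()
pending = len(gc.get_objects()) - baseline    # abandoned cycles still alive
gc.collect()                                  # cycle collector frees them all
leaked = len(gc.get_objects()) - baseline     # back near the baseline
gc.enable()
```

Running a real plotting loop under this kind of object-count bookkeeping (or inspecting `gc.garbage` after `gc.set_debug(gc.DEBUG_SAVEALL)`) can show whether the figures are merely held in collectable cycles or are genuinely still referenced.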
From: Damon M. <dam...@gm...> - 2009-01-19 11:09:10
|
I'm looping over several files (about 1000) to produce a vector field plot for each data file I have. Doing this with the MacOSX backend appears to chew memory. My guess as to the source of the problem is the 'savefig' function (or possibly the way the MacOSX backend handles the saving of plots).

I opened Activity Monitor to watch the usage of memory increase. Below is code that recreates the problem.

[start]

import matplotlib
matplotlib.use('macosx')
matplotlib.rc('font',**{'family':'serif','serif':['Computer Modern Roman']})
matplotlib.rc('text', usetex=True)
from pylab import *

i = 0
x = []
y = []
v1 = []
v2 = []

while(True):
    f = open("%dresults.dat"%i,"r")
    for line in f:
        x.append(float(line.split()[0]))
        y.append(float(line.split()[1]))
        v1.append(float(line.split()[2]))
        v2.append(float(line.split()[3]))
    f.close()
    hold(False)
    figure(1)
    quiver(x, y, v1, v2, color='b', units='width', scale=1.0)
    xlabel('$x$')
    ylabel('$y$')
    grid(True)
    print i
    savefig('graph-%05d.pdf'%i)
    close(1)
    x = []
    y = []
    v1 = []
    v2 = []
    i = i + 1

[end]

Regards,
--Damon |
From: Eric F. <ef...@ha...> - 2009-01-18 00:57:46
|
Michael Droettboom wrote:
> You need the '-S' parameter to specify a branch. Otherwise, any
> arguments after the command name are just paths within the working copy,
> just like most other svn commands.
>
> So you need to do:
>
>> svnmerge.py merge -S v0_98_5_maint
>
> I just tested a change to the branch followed by a merge and everything
> seems to be working fine here.
>
> Eric Firing wrote:
>> John Hunter wrote:
>> svnmerge init
>> https://matplotlib.svn.sourceforge.net/svnroot/matplotlib/branches/v0_98_5_maint
>
> That would have no effect without a following commit anyway... but
> please don't do that if you're not sure what you're doing. That command
> really should only ever be needed once. It's pretty hard to get into a
> state where it would ever need to be done again for a particular branch.
>
>> but it looks like I was still in the wrong directory when I did that,
>> so I don't know if it had any useful effect.
>>
>> After getting into the right directory, some combination of svn up and
>> svnmerge merge seemed to get everything straightened out, with a
>> little editing to resolve conflicts along the way.
>
> You can do merges from within subdirectories, and it works just like
> most other svn commands when run from a subdirectory. Generally,
> though, I like to catch all available merges and run from the root of
> the source tree.
>
>> That was last night, in the misty past. Now it looks like I am back
>> with the original problem I started with last night, and which you
>> also reported:
>>
>> efiring@manini:~/programs/py/mpl/mpl_trunk$ svnmerge avail /branches/v0_98_5_maint
>> svnmerge: "/branches/v0_98_5_maint" is not a subversion working directory
>
> Again, you're specifying a path that doesn't exist within the source
> tree. There is no need to specify a path (generally) with the "svnmerge
> avail" command.
>
>> So, I'm baffled again. It is as if Jae-Joon's commit since mine of
>> last night, and my corresponding "svn up" this morning, wiped out the
>> svnmerge tracking info.
>>
>> I suspect a brief wave of Mike's magic wand tomorrow morning will
>> clear away the fog.
>
> I think this all comes down to missing the '-S'. I didn't need to get
> out my wand for this one... ;)

Well, looking back at the command history in the terminal window I was using, I was using the -S last night when things first started going haywire; but one way or another, or many ways, I was getting confused. Thanks for the clarifications and testing.

Eric |
From: Michael D. <md...@st...> - 2009-01-18 00:19:05
|
You need the '-S' parameter to specify a branch. Otherwise, any arguments after the command name are just paths within the working copy, just like most other svn commands.

So you need to do:

    svnmerge.py merge -S v0_98_5_maint

I just tested a change to the branch followed by a merge and everything seems to be working fine here.

Eric Firing wrote:
> John Hunter wrote:
> svnmerge init
> https://matplotlib.svn.sourceforge.net/svnroot/matplotlib/branches/v0_98_5_maint

That would have no effect without a following commit anyway... but please don't do that if you're not sure what you're doing. That command really should only ever be needed once. It's pretty hard to get into a state where it would ever need to be done again for a particular branch.

> but it looks like I was still in the wrong directory when I did that, so
> I don't know if it had any useful effect.
>
> After getting into the right directory, some combination of svn up and
> svnmerge merge seemed to get everything straightened out, with a little
> editing to resolve conflicts along the way.

You can do merges from within subdirectories, and it works just like most other svn commands when run from a subdirectory. Generally, though, I like to catch all available merges and run from the root of the source tree.

> That was last night, in the misty past. Now it looks like I am back
> with the original problem I started with last night, and which you also
> reported:
>
> efiring@manini:~/programs/py/mpl/mpl_trunk$ svnmerge avail /branches/v0_98_5_maint
> svnmerge: "/branches/v0_98_5_maint" is not a subversion working directory

Again, you're specifying a path that doesn't exist within the source tree. There is no need to specify a path (generally) with the "svnmerge avail" command.

> So, I'm baffled again. It is as if Jae-Joon's commit since mine of last
> night, and my corresponding "svn up" this morning, wiped out the
> svnmerge tracking info.
>
> I suspect a brief wave of Mike's magic wand tomorrow morning will clear
> away the fog.

I think this all comes down to missing the '-S'. I didn't need to get out my wand for this one... ;)

Mike |
From: Eric F. <ef...@ha...> - 2009-01-17 17:49:46
|
John Hunter wrote:
> After reading in a separate thread that Eric was having trouble with
> svnmerge, I gave it a try and got
>
> jdhunter@uqbar:mpl> svnmerge.py merge v0_98_5_maint
> svnmerge: "v0_98_5_maint" is not a subversion working directory
>
> Maybe our svn merge guru (MD) could take a look and see if anything
> looks out of whack?

John,

I don't understand exactly what was going on, but I suspect there may have been two or more problems--especially since you seem to have run into the same error message that I was getting. One problem, I think, is that I was trying to run svnmerge from a subdirectory instead of from the root of my checkout. I simply did not notice that I was in the wrong directory until I had thrashed around for a while. There may be, or have been, a larger problem as well--maybe caused by me, maybe not. In any case, to get things working, I reran

    svnmerge init https://matplotlib.svn.sourceforge.net/svnroot/matplotlib/branches/v0_98_5_maint

but it looks like I was still in the wrong directory when I did that, so I don't know if it had any useful effect. After getting into the right directory, some combination of svn up and svnmerge merge seemed to get everything straightened out, with a little editing to resolve conflicts along the way.

That was last night, in the misty past. Now it looks like I am back with the original problem I started with last night, and which you also reported:

    efiring@manini:~/programs/py/mpl/mpl_trunk$ svnmerge avail /branches/v0_98_5_maint
    svnmerge: "/branches/v0_98_5_maint" is not a subversion working directory

So, I'm baffled again. It is as if Jae-Joon's commit since mine of last night, and my corresponding "svn up" this morning, wiped out the svnmerge tracking info.

I suspect a brief wave of Mike's magic wand tomorrow morning will clear away the fog.

Eric |
From: John H. <jd...@gm...> - 2009-01-17 12:55:55
|
After reading in a separate thread that Eric was having trouble with svnmerge, I gave it a try and got

    jdhunter@uqbar:mpl> svnmerge.py merge v0_98_5_maint
    svnmerge: "v0_98_5_maint" is not a subversion working directory

Maybe our svn merge guru (MD) could take a look and see if anything looks out of whack?

JDH |
From: Jae-Joon L. <lee...@gm...> - 2009-01-17 10:29:56
|
Hi all,

I added an (experimental) bbox_inches option for savefig in the trunk. If provided, only the specified area of the figure will be saved. bbox_inches can be "tight", in which case a tight bounding box is estimated internally (but this draws the figure twice). Take a look at "demo_tightbbox.py".

The implementation is a bit experimental, and it would be appreciated if others test and review the changes. When bbox_inches is given, savefig temporarily modifies fig.bbox, fig.bbox_inches and fig.transFigure._boxout. The algorithm to determine the tight bbox is rather primitive and should be improved.

regards,

-JJ |
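[Editor's note] For intuition, the "tight" estimate Jae-Joon describes boils down to rendering once to learn each artist's extent and then taking the union of those rectangles. A toy sketch of that union step (the `Bbox` class here is illustrative only, not matplotlib's `transforms.Bbox`):

```python
class Bbox:
    """Axis-aligned rectangle: (x0, y0) lower-left, (x1, y1) upper-right."""
    def __init__(self, x0, y0, x1, y1):
        self.x0, self.y0, self.x1, self.y1 = x0, y0, x1, y1

    @staticmethod
    def union(boxes):
        # The tight bounding box is the smallest rectangle covering
        # every artist's extent.
        return Bbox(min(b.x0 for b in boxes),
                    min(b.y0 for b in boxes),
                    max(b.x1 for b in boxes),
                    max(b.y1 for b in boxes))

# Extents of, say, the axes frame, an x-label below it, and a title above it.
tight = Bbox.union([Bbox(1, 1, 7, 5), Bbox(0.5, 0.2, 7, 1), Bbox(2, 5, 6, 5.6)])
```

The "draws the figure twice" cost comes from needing a first draw just to learn those extents before the real save can be cropped to their union.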
From: Eric F. <ef...@ha...> - 2009-01-16 21:14:59
|
John Hunter wrote:
[...]
> The code is trying to add a non-unitized quantity (eg an errorbar
> width but just guessing) of int type with a unitized quantity
> TaggedValue (this is from the mockup basic_units testing package).
> You'd have to dig a little bit to find out where the non-unitized
> quantity is entering. errorbar is a complex example that was once
> (and I think still is) working on the 91 branch. You may want to
> compare the errorbar function on the branch vs the trunk and see what
> new feature and code change broke the units support. Again, it might
> be cleaner to have an ErrorbarItem that stores the errorbar function
> inputs and updates artist primitives on unit change rather than try
> and propagate the original unitized data down to the artist layer. As
> Eric suggested, doing these one at a time is probably a good idea, and
> errorbar is a good test case because it is so damned hairy :-)

One of the reasons for doing all the conversions at the higher level than the primitive artist is that often one *has* to do the conversion at that higher level in order to do the calculations required to draw the item; so a consequence of putting the conversion in the primitive artists is that the conversion facilities have to live at *both* levels, which makes the code harder to understand and maintain.

The only penalty in taking the conversion out of the primitive artists is that a user who wants to support units in a custom plot type, using primitive artists, must include the unit conversion etc. I don't think this is a big problem for new code, though, because if the conversion is at that higher level only, then it is easy to show how to do it (there will be plenty of examples), and to ensure that there are enough helper functions to make it easy to code. Maybe there already are. Or maybe deriving from a PlotItem base class would make it easier. (Or maybe this is a place where decorators would simplify things? Just a random idea, not thought out.)

Eric |
From: Ryan M. <rm...@gm...> - 2009-01-16 20:32:40
|
John Hunter wrote:
> The code is trying to add a non-unitized quantity (eg an errorbar
> width but just guessing) of int type with a unitized quantity
> TaggedValue (this is from the mockup basic_units testing package).
> You'd have to dig a little bit to find out where the non-unitized
> quantity is entering. errorbar is a complex example that was once
> (and I think still is) working on the 91 branch. You may want to
> compare the errorbar function on the branch vs the trunk and see what
> new feature and code change broke the units support. Again, it might
> be cleaner to have an ErrorbarItem that stores the errorbar function
> inputs and updates artist primitives on unit change rather than try
> and propagate the original unitized data down to the artist layer. As
> Eric suggested, doing these one at a time is probably a good idea, and
> errorbar is a good test case because it is so damned hairy :-)

Ok, so what I'm taking from your responses is that it's not a waste of time to fix these, but that it is likely more involved than something I can do when I have only a short time to hack. I'll file these away (though if anyone else feels motivated, feel free!) :)

Ryan

--
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma |
From: Michael D. <md...@st...> - 2009-01-16 20:31:21
|
Michael Droettboom wrote:
> Andrew Hawryluk wrote:
>> I'm really excited about the new path simplification option for vector
>> output formats. I tried it the first time yesterday and reduced a PDF
>> from 231 kB to 47 kB. Thanks very much for providing this feature!
>>
>> However, I have noticed that the simplified paths often look more
>> jagged than the original, at least for my data. I can recreate the
>> effect with the following:
>>
>> [start]
>>
>> import numpy as np
>> import matplotlib.pyplot as plt
>>
>> x = np.arange(-3,3,0.001)
>> y = np.exp(-x**2) + np.random.normal(scale=0.001,size=x.size)
>> plt.plot(x,y)
>> plt.savefig('test.png')
>> plt.savefig('test.pdf')
>>
>> [end]
>>
>> A sample output is attached, and close inspection shows that the PNG
>> is a smooth curve with a small amount of noise while the PDF version
>> has very noticeable changes in direction from one line segment to the
>> next.
>>
>> <<test.png>> <<test.pdf>>
>>
>> The simplification algorithm (agg_py_path_iterator.h) does the following:
>>
>> If line2 is nearly parallel to line1, add the parallel component to
>> the length of line1, leaving its direction unchanged
>>
>> which results in a new data point, not contained in the original data.
>> Line1 will continue to be lengthened until it has deviated from the
>> data curve enough that the next true data point is considered
>> non-parallel. The cycle then continues. The result is a line that
>> wanders around the data curve, and only the first point is guaranteed
>> to have existed in the original data set.
>>
>> Instead, could the simplification algorithm do:
>>
>> If line2 is nearly parallel to line1, combine them by removing the
>> common point, leaving a single line where both end points existed in
>> the original data

I've attached a patch that will only include points from the original data in the simplified path. I hesitate to commit it to SVN, as these things are very hard to get right -- and just because it appears to work better on this data doesn't mean it doesn't create a regression on something else... ;) That said, it would be nice to confirm that this solution works, because it has the added benefit of being a little simpler computationally. Be sure to blitz your build directory when testing the patch -- distutils won't pick it up as a dependency.

I've attached two PDFs -- one with the original (current trunk) behavior, and one with the new behavior. I plotted the unsimplified plot in thick blue behind the simplified plot in green, so you can see how much deviation there is between the original data and the simplified line (you'll want to zoom way in with your PDF viewer to see it). I've also included a new version of your test script which detects "new" data values in the simplified path, and also seeds the random number generator so that results are comparable. I also set the solid_joinstyle to "round", as it makes the wiggliness less pronounced. (There was another thread on this list recently about making that the default setting.)

Cheers,
Mike

--
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA |
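[Editor's note] Andrew's proposal -- when consecutive segments are nearly parallel, drop the shared vertex so every surviving vertex is an original data point -- can be sketched independently of the Agg C++ code. This is an illustration only; the real patch lives inside agg_py_path_iterator.h, and a production version would likely use a perpendicular-distance or angle tolerance rather than this raw cross-product threshold:

```python
def simplify(points, tol=1e-6):
    """Collinear-vertex removal: keep an interior vertex only when the
    path actually turns there. Every output point is an input point,
    so the simplified line never wanders off the original data."""
    if len(points) <= 2:
        return list(points)
    out = [points[0]]
    for i in range(1, len(points) - 1):
        # direction from the last kept vertex to the candidate...
        ax = points[i][0] - out[-1][0]
        ay = points[i][1] - out[-1][1]
        # ...and from the candidate to the following vertex.
        bx = points[i + 1][0] - points[i][0]
        by = points[i + 1][1] - points[i][1]
        # cross product measures the turn; (near-)zero means collinear.
        if abs(ax * by - ay * bx) > tol:
            out.append(points[i])
    out.append(points[-1])
    return out
```

On a straight run of samples only the two endpoints survive, which is exactly the "single line where both end points existed in the original data" behavior Andrew asks for.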
From: John H. <jd...@gm...> - 2009-01-16 20:21:35
|
On Fri, Jan 16, 2009 at 2:02 PM, Ryan May <rm...@gm...> wrote:
> Hi,
>
> In fixing the recursion bug in the units support, I went through the examples in
> units/ and found two broken examples (broken before I fixed the recursion bug):
>
> 1) artist_tests.py
> Traceback (most recent call last):
>   File "artist_tests.py", line 30, in <module>
>     lc = collections.LineCollection(verts, axes=ax)
>   File "/home/rmay/.local/lib64/python2.5/site-packages/matplotlib/collections.py", line 917, in __init__
>     self.set_segments(segments)
>   File "/home/rmay/.local/lib64/python2.5/site-packages/matplotlib/collections.py", line 927, in set_segments
>     seg = np.asarray(seg, np.float_)
>   File "/home/rmay/.local/lib64/python2.5/site-packages/numpy/core/numeric.py", line 230, in asarray
>     return array(a, dtype, copy=False, order=order)
> ValueError: setting an array element with a sequence.

The collection is trying to explicitly cast to a float when creating the array instead of doing a conversion of the unit type first. The set_segments method should convert to float using the ax.convert_xunits before setting the array, and register to listen for a unit change so that if for example the axis units are changed from inches to cm, the segments are reset. Eg in Line2D, the set_axes method calls:

    def set_axes(self, ax):
        Artist.set_axes(self, ax)
        if ax.xaxis is not None:
            self._xcid = ax.xaxis.callbacks.connect('units', self.recache)
        if ax.yaxis is not None:
            self._ycid = ax.yaxis.callbacks.connect('units', self.recache)

and later "recache"::

    def recache(self):
        #if self.axes is None: print 'recache no axes'
        #else: print 'recache units', self.axes.xaxis.units, self.axes.yaxis.units
        if ma.isMaskedArray(self._xorig) or ma.isMaskedArray(self._yorig):
            x = ma.asarray(self.convert_xunits(self._xorig), float)
            y = ma.asarray(self.convert_yunits(self._yorig), float)
            x = ma.ravel(x)
            y = ma.ravel(y)

So the artist has to keep track of the original units data, and the converted value. For simple "unit" types like datetime, this is not so important because you convert once and you are done. For true unitized types like basic_unit where we can switch the axis from inches to cm, someone has to track the original unit data to convert it to floats on a unit change. In a prior thread, I indicated I thought the current implementation needs a rethinking, because it may be easier for everyone concerned if the original data storage and conversion happens at a higher layer, eg the hypothetical PlotItem layer. As Eric pointed out, this would have the added benefit of significantly thinning out the axes.py module code.

> 2) bar_unit_demo.py
> Traceback (most recent call last):
>   File "bar_unit_demo.py", line 15, in <module>
>     p1 = ax.bar(ind, menMeans, width, color='r', bottom=0*cm, yerr=menStd)
>   File "/home/rmay/.local/lib64/python2.5/site-packages/matplotlib/axes.py", line 4134, in bar
>     fmt=None, ecolor=ecolor, capsize=capsize)
>   File "/home/rmay/.local/lib64/python2.5/site-packages/matplotlib/axes.py", line 4678, in errorbar
>     in cbook.safezip(y,yerr)]
> TypeError: unsupported operand type(s) for -: 'int' and 'TaggedValue'
>
> If anyone has any quick ideas on what might have gone wrong (or if these are just
> outdated), let me know. Otherwise, I'll start digging...

The code is trying to add a non-unitized quantity (eg an errorbar width but just guessing) of int type with a unitized quantity TaggedValue (this is from the mockup basic_units testing package). You'd have to dig a little bit to find out where the non-unitized quantity is entering. errorbar is a complex example that was once (and I think still is) working on the 91 branch. You may want to compare the errorbar function on the branch vs the trunk and see what new feature and code change broke the units support. Again, it might be cleaner to have an ErrorbarItem that stores the errorbar function inputs and updates artist primitives on unit change rather than try and propagate the original unitized data down to the artist layer. As Eric suggested, doing these one at a time is probably a good idea, and errorbar is a good test case because it is so damned hairy :-)

JDH |
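[Editor's note] The listen-and-recache machinery John quotes reduces to a bare observer pattern: the artist keeps the original unit-bearing data and re-derives floats whenever the axis announces a unit change. A self-contained sketch (all names illustrative, not matplotlib's actual API; "units" are modeled as a plain scale factor):

```python
class Axis:
    """Holds the current unit scale and notifies listeners on change."""
    def __init__(self, scale=1.0):
        self.scale = scale
        self._callbacks = []

    def on_units(self, cb):
        self._callbacks.append(cb)

    def set_units(self, scale):
        self.scale = scale
        for cb in self._callbacks:
            cb()                    # e.g. each artist's recache()

class Line:
    """Keeps the original data and re-derives floats on unit change."""
    def __init__(self, axis, xorig):
        self.axis = axis
        self._xorig = list(xorig)   # original, unit-bearing data
        axis.on_units(self.recache)
        self.recache()

    def recache(self):
        # convert the original data into the axis's current units
        self.x = [v * self.axis.scale for v in self._xorig]

axis = Axis(scale=1.0)              # think "inches"
line = Line(axis, [1, 2, 3])
axis.set_units(2.54)                # switch to "cm": line recaches itself
```

This is exactly why the artist (or, in John's proposal, a higher-level PlotItem) must retain `_xorig`: once the data are flattened to floats, a later unit switch has nothing left to convert.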
From: Ryan M. <rm...@gm...> - 2009-01-16 20:02:17
|
Hi,

In fixing the recursion bug in the units support, I went through the examples in units/ and found two broken examples (broken before I fixed the recursion bug):

1) artist_tests.py

    Traceback (most recent call last):
      File "artist_tests.py", line 30, in <module>
        lc = collections.LineCollection(verts, axes=ax)
      File "/home/rmay/.local/lib64/python2.5/site-packages/matplotlib/collections.py", line 917, in __init__
        self.set_segments(segments)
      File "/home/rmay/.local/lib64/python2.5/site-packages/matplotlib/collections.py", line 927, in set_segments
        seg = np.asarray(seg, np.float_)
      File "/home/rmay/.local/lib64/python2.5/site-packages/numpy/core/numeric.py", line 230, in asarray
        return array(a, dtype, copy=False, order=order)
    ValueError: setting an array element with a sequence.

2) bar_unit_demo.py

    Traceback (most recent call last):
      File "bar_unit_demo.py", line 15, in <module>
        p1 = ax.bar(ind, menMeans, width, color='r', bottom=0*cm, yerr=menStd)
      File "/home/rmay/.local/lib64/python2.5/site-packages/matplotlib/axes.py", line 4134, in bar
        fmt=None, ecolor=ecolor, capsize=capsize)
      File "/home/rmay/.local/lib64/python2.5/site-packages/matplotlib/axes.py", line 4678, in errorbar
        in cbook.safezip(y,yerr)]
    TypeError: unsupported operand type(s) for -: 'int' and 'TaggedValue'

If anyone has any quick ideas on what might have gone wrong (or if these are just outdated), let me know. Otherwise, I'll start digging...

Ryan

--
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma |
From: Andrew S. <str...@as...> - 2009-01-16 19:54:07
|
Hmm, I tried "svnmerge.py avail" from the branch after committing to the trunk. I see now that I should have committed to the branch first (which seems an inversion to me). Duly noted for the future, though.

Still working on seamless git-svn and svnmerge.py integration,
Andrew

John Hunter wrote:
> On Fri, Jan 16, 2009 at 12:38 PM, Andrew Straw <str...@as...> wrote:
>> John Hunter wrote:
>>> Andrew, since you are the original author of the isnan port, could you
>>> patch the branch and the trunk to take care of this?
>>
>> Done in r6791 and r6792.
>>
>> Sorry for the trouble.
>>
>> Now I just hope we don't get a problem with "long long", although now if
>> _ISOC99_SOURCE is defined, we'll preferentially use "int64_t" out of
>> <stdint.h>, so I should think this is more portable on sane platforms.
>>
>> This is one of many reasons why I stick to Python...
>
> Thanks Andrew for applying this, and George, I forgot to mention in my
> last post thanks especially for tracking down this nasty bug. Andrew,
> for future reference, when fixing a bug on the branch, it is best to
> svnmerge it onto the trunk rather than committing it separately, since
> subsequent merges will bring it over and confuse the commit log.
> Instructions at
>
> http://matplotlib.sourceforge.net/devel/coding_guide.html#using-svnmerge
>
> Thanks again!
> JDH |
From: John H. <jd...@gm...> - 2009-01-16 18:59:11
|
On Fri, Jan 16, 2009 at 12:38 PM, Andrew Straw <str...@as...> wrote:
> John Hunter wrote:
>> Andrew, since you are the original author of the isnan port, could you
>> patch the branch and the trunk to take care of this?
>
> Done in r6791 and r6792.
>
> Sorry for the trouble.
>
> Now I just hope we don't get a problem with "long long", although now if
> _ISOC99_SOURCE is defined, we'll preferentially use "int64_t" out of
> <stdint.h>, so I should think this is more portable on sane platforms.
>
> This is one of many reasons why I stick to Python...

Thanks Andrew for applying this, and George, I forgot to mention in my last post thanks especially for tracking down this nasty bug. Andrew, for future reference, when fixing a bug on the branch, it is best to svnmerge it onto the trunk rather than committing it separately, since subsequent merges will bring it over and confuse the commit log. Instructions at

http://matplotlib.sourceforge.net/devel/coding_guide.html#using-svnmerge

Thanks again!
JDH |
From: Andrew S. <str...@as...> - 2009-01-16 18:38:08
|
John Hunter wrote:
> Andrew, since you are the original author of the isnan port, could you
> patch the branch and the trunk to take care of this?

Done in r6791 and r6792.

Sorry for the trouble.

Now I just hope we don't get a problem with "long long", although now if _ISOC99_SOURCE is defined, we'll preferentially use "int64_t" out of <stdint.h>, so I should think this is more portable on sane platforms.

This is one of many reasons why I stick to Python...

-Andrew

> JDH
>
> On Fri, Jan 16, 2009 at 8:07 AM, George <gw...@em...> wrote:
>> Hello.
>>
>> I am terribly sorry. I was mistaken last night. I had the latest Matplotlib
>> version 0.98.5.2 and I thought the bug was fixed but it hasn't. Let me explain.
>>
>> In the file MPL_isnan.h line 26 there is a declaration:
>>
>> typedef long int MPL_Int64
>>
>> This is fine for Linux 64-bit, but NOT for Windows XP 64-bit! For Windows the
>> declaration should be:
>>
>> typedef long long MPL_Int64
>>
>> This bug has caused me a LOT of late nights and last night was one of them. The
>> declaration is correct for Linux 64-bit and I guess Matplotlib was developed on
>> Linux because of this declaration. That is also why I thought the bug was fixed
>> but this morning I realised that I was looking at the wrong console.
>>
>> So, in summary: for Matplotlib 0.98.5.2 and Numpy 1.2.1 to work without any
>> problems (this means compiling and using Numpy and Matplotlib on Windows XP
>> 64-bit using the AMD 64-bit compile environment), change line 26 in the file
>> MPL_isnan.h from long int to long long.
>>
>> I also previously suggested switching MKL and ACML etc. but with this change
>> everything is fine. One can choose any math library and it works.
>>
>> Writing a small test application using sizeof on different platforms highlights
>> the problem.
>>
>> Thanks.
>>
>> George.
>>
>> _______________________________________________
>> Numpy-discussion mailing list
>> Num...@sc...
>> http://projects.scipy.org/mailman/listinfo/numpy-discussion |
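[Editor's note] George's sizeof suggestion does not even require writing C: Python's ctypes reports the host compiler's type sizes, which makes the LP64-versus-LLP64 difference he describes directly visible. On 64-bit Linux/macOS (LP64) `long` is 8 bytes, so `typedef long int MPL_Int64` happens to work; on 64-bit Windows (LLP64) `long` stays 4 bytes and only `long long` is 64-bit:

```python
import ctypes

# The root cause of the MPL_isnan.h bug: these two differ on LLP64 Windows.
size_long = ctypes.sizeof(ctypes.c_long)          # 8 on LP64, 4 on LLP64
size_longlong = ctypes.sizeof(ctypes.c_longlong)  # 8 on both
size_int64 = ctypes.sizeof(ctypes.c_int64)        # always 8, by definition
```

This is also why Andrew's fix of preferring `int64_t` from <stdint.h> when C99 is available is the more portable choice: it names the width explicitly instead of relying on the platform's data model.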
From: Michael D. <md...@st...> - 2009-01-16 18:35:34
Since I suspect this change will be a little bit of work, I just wanted to put my hand up and say I'm looking into it, so we don't duplicate effort here. I think it's a worthwhile experiment, in any case.

Mike

Andrew Hawryluk wrote:
> I'm really excited about the new path simplification option for vector output formats. I tried it for the first time yesterday and reduced a PDF from 231 kB to 47 kB. Thanks very much for providing this feature!
>
> However, I have noticed that the simplified paths often look more jagged than the original, at least for my data. I can recreate the effect with the following:
>
> [start]
> import numpy as np
> import matplotlib.pyplot as plt
>
> x = np.arange(-3, 3, 0.001)
> y = np.exp(-x**2) + np.random.normal(scale=0.001, size=x.size)
>
> plt.plot(x, y)
> plt.savefig('test.png')
> plt.savefig('test.pdf')
> [end]
>
> A sample output is attached, and close inspection shows that the PNG is a smooth curve with a small amount of noise, while the PDF version has very noticeable changes in direction from one line segment to the next.
>
> <<test.png>> <<test.pdf>>
>
> The simplification algorithm (agg_py_path_iterator.h) does the following:
>
> If line2 is nearly parallel to line1, add the parallel component to the length of line1, leaving its direction unchanged
>
> which results in a new data point not contained in the original data. Line1 will continue to be lengthened until it has deviated from the data curve enough that the next true data point is considered non-parallel. The cycle then continues. The result is a line that wanders around the data curve, and only the first point is guaranteed to have existed in the original data set.
>
> Instead, could the simplification algorithm do:
>
> If line2 is nearly parallel to line1, combine them by removing the common point, leaving a single line where both end points existed in the original data
>
> Thanks again,
>
> Andrew Hawryluk
>
> ------------------------------------------------------------------------------
> This SF.net email is sponsored by:
> SourceForge Community
> SourceForge wants to tell your story.
> http://p.sf.net/sfu/sf-spreadtheword
> _______________________________________________
> Matplotlib-devel mailing list
> Mat...@li...
> https://lists.sourceforge.net/lists/listinfo/matplotlib-devel

--
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
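The point-removal variant Andrew proposes can be sketched in a few lines (a hypothetical standalone helper, not the actual agg_py_path_iterator.h code): keep an interior vertex only when the segments on either side of it are not nearly parallel, so every surviving point is an original data point.

```python
import numpy as np

def simplify_keep_vertices(verts, parallel_tol=1e-3):
    """Drop interior vertices where the incoming and outgoing segments
    are nearly parallel. Every returned point is an original data point."""
    verts = np.asarray(verts, dtype=float)
    keep = [0]                           # always keep the first point
    for i in range(1, len(verts) - 1):
        v1 = verts[i] - verts[keep[-1]]  # segment from the last kept point
        v2 = verts[i + 1] - verts[i]     # segment to the next point
        # cross-product magnitude near 0 means nearly parallel
        cross = v1[0] * v2[1] - v1[1] * v2[0]
        if abs(cross) > parallel_tol * (np.hypot(*v1) * np.hypot(*v2) + 1e-30):
            keep.append(i)
    keep.append(len(verts) - 1)          # always keep the last point
    return verts[keep]
```

For example, 100 collinear points reduce to just the two endpoints, and a corner is preserved exactly because its cross product is large. Whether this matches the performance of the existing lengthen-line1 approach inside Agg is a separate question; this only shows the geometric idea.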
From: Jeff W. <js...@fa...> - 2009-01-16 16:15:22
Michiel de Hoon wrote:
> I've written a patch that fixes this bug; see
>
> https://sourceforge.net/tracker/?func=detail&atid=560722&aid=2508440&group_id=80706
>
> --Michiel

Just committed your patch (SVN r6787) - thanks Michiel.

-Jeff

> --- On Mon, 1/12/09, Tony Yu <ts...@gm...> wrote:
>> From: Tony Yu <ts...@gm...>
>> Subject: [matplotlib-devel] Jagged plot in macosx backend
>> To: "matplotlib development list" <mat...@li...>
>> Date: Monday, January 12, 2009, 2:59 PM
>>
>> There appears to be a bug in the macosx backend. When I plot large numbers with small variations in the value, the numbers seem to be coarsely rounded off. This bug doesn't appear with other backends (I tried WxAgg and TkAgg). Below is a simple script showing the problem and the resulting plot on the macosx backend.
>>
>> Thanks,
>> -Tony
>>
>> Mac OS X 10.5.6
>> Matplotlib svn r6779
>>
>> #~~~~~~~~
>>
>> import numpy as np
>> import matplotlib.pyplot as plt
>>
>> x = np.linspace(0, 1)
>> plt.plot(x, x + 1e6)
>> plt.show()

--
Jeffrey S. Whitaker          Phone:  (303) 497-6313
Meteorologist                FAX:    (303) 497-6449
NOAA/OAR/PSD R/PSD1          Email:  Jef...@no...
325 Broadway                 Office: Skaggs Research Cntr 1D-113
Boulder, CO, USA 80303-3328  Web:    http://tinyurl.com/5telg
From: Michiel de H. <mjl...@ya...> - 2009-01-16 15:59:53
I've written a patch that fixes this bug; see

https://sourceforge.net/tracker/?func=detail&atid=560722&aid=2508440&group_id=80706

--Michiel

--- On Mon, 1/12/09, Tony Yu <ts...@gm...> wrote:
> From: Tony Yu <ts...@gm...>
> Subject: [matplotlib-devel] Jagged plot in macosx backend
> To: "matplotlib development list" <mat...@li...>
> Date: Monday, January 12, 2009, 2:59 PM
>
> There appears to be a bug in the macosx backend. When I plot large numbers with small variations in the value, the numbers seem to be coarsely rounded off. This bug doesn't appear with other backends (I tried WxAgg and TkAgg). Below is a simple script showing the problem and the resulting plot on the macosx backend.
>
> Thanks,
> -Tony
>
> Mac OS X 10.5.6
> Matplotlib svn r6779
>
> #~~~~~~~~
>
> import numpy as np
> import matplotlib.pyplot as plt
>
> x = np.linspace(0, 1)
> plt.plot(x, x + 1e6)
> plt.show()
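The symptom Tony describes is what single-precision rounding does to large offsets; whether the macosx backend was actually truncating to float32 is what the patch addresses, but the arithmetic can be demonstrated on its own (illustration only, not backend code):

```python
import numpy as np

# Tony's script plots values near 1e6 with variations of order 1:
x = np.linspace(0, 1)
y = x + 1e6

# At magnitude 1e6, adjacent float32 values are 0.0625 apart, so if
# coordinates pass through single precision, nearby y values collapse
# onto the same representable number and the line moves in coarse steps.
y32 = y.astype(np.float32)

step = np.spacing(np.float32(1e6))   # gap between adjacent float32s at 1e6
n_distinct_64 = len(np.unique(y))    # 50 distinct values in double
n_distinct_32 = len(np.unique(y32))  # far fewer survive in single
```

The variation of 1 over 50 points gives increments of about 0.02, a third of the 0.0625 float32 grid, so runs of points land on identical values.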
From: Ryan M. <rm...@gm...> - 2009-01-15 23:11:24
John Hunter wrote:
> On Thu, Jan 15, 2009 at 4:58 PM, Ryan May <rm...@gm...> wrote:
>> Ok, my debugging tells me the problem comes down to the units support,
>> specifically this code starting at line 130 in units.py:
>>
>>     if converter is None and iterable(x):
>>         # if this is anything but an object array, we'll assume
>>         # there are no custom units
>>         if isinstance(x, np.ndarray) and x.dtype != np.object:
>>             return None
>>
>>         for thisx in x:
>>             converter = self.get_converter( thisx )
>>             return converter
>>
>> Because a string is iterable, and even a single character is considered iterable, this code recurses forever. I think this can be solved by, in addition to the iterable() check, making sure that x is not string-like. If it is, this will return None as the converter. Somehow, this actually will then plot properly. I'm still trying to run down why this works, but I'm running out of time for the day. I will say that the data set for the line2D object is indeed a masked array of dtype ('|S4').
>>
>> Anyone object to adding the check?
>
> Nope -- good idea

Ok, I'll check it in when I have a chance to run backend_driver.py and make sure nothing breaks. (Not that it should.) I'll also take a crack at adding a test.

For future reference, plotting lists/arrays of strings works (at least for lines) because Path calls .astype() on the arrays passed in, which does the conversion for us. So (part of) matplotlib actually does support plotting sequences of string representations of numbers; it was just hindered by the unit check.

>> In addition, why are we looping over thisx in x but returning inside the loop?
>> Wouldn't this *always* be the same as x[0]?
>
> The loop works for generic iterables that are not indexable and also for length-0 iterables

Ok, that makes sense.

Ryan

--
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
From: Eric F. <ef...@ha...> - 2009-01-15 23:07:42
Ryan May wrote:
> Ok, my debugging tells me the problem comes down to the units support,
> specifically this code starting at line 130 in units.py:
>
>     if converter is None and iterable(x):
>         # if this is anything but an object array, we'll assume
>         # there are no custom units
>         if isinstance(x, np.ndarray) and x.dtype != np.object:
>             return None
>
>         for thisx in x:
>             converter = self.get_converter( thisx )
>             return converter
>
> Because a string is iterable, and even a single character is considered iterable, this code recurses forever. I think this can be solved by, in addition to the iterable() check, making sure that x is not string-like.

Doing this check makes sense.

> If it is, this will return None as the converter. Somehow, this actually will then plot properly. I'm still trying to run down why this works, but I'm running out of time for the day. I will say that the data set for the line2D object is indeed a masked array of dtype ('|S4').
>
> Anyone object to adding the check?
>
> In addition, why are we looping over thisx in x but returning inside the loop?
> Wouldn't this *always* be the same as x[0]?

The idea is to base the converter selection on the first item instead of checking every item, which would be very slow.

> Ryan
From: John H. <jd...@gm...> - 2009-01-15 23:05:21
On Thu, Jan 15, 2009 at 4:58 PM, Ryan May <rm...@gm...> wrote:
> Ok, my debugging tells me the problem comes down to the units support,
> specifically this code starting at line 130 in units.py:
>
>     if converter is None and iterable(x):
>         # if this is anything but an object array, we'll assume
>         # there are no custom units
>         if isinstance(x, np.ndarray) and x.dtype != np.object:
>             return None
>
>         for thisx in x:
>             converter = self.get_converter( thisx )
>             return converter
>
> Because a string is iterable, and even a single character is considered iterable, this code recurses forever. I think this can be solved by, in addition to the iterable() check, making sure that x is not string-like. If it is, this will return None as the converter. Somehow, this actually will then plot properly. I'm still trying to run down why this works, but I'm running out of time for the day. I will say that the data set for the line2D object is indeed a masked array of dtype ('|S4').
>
> Anyone object to adding the check?

Nope -- good idea

> In addition, why are we looping over thisx in x but returning inside the loop?
> Wouldn't this *always* be the same as x[0]?

The loop works for generic iterables that are not indexable and also for length-0 iterables

JDH
From: Ryan M. <rm...@gm...> - 2009-01-15 22:58:07
Ryan May wrote:
> Neal Becker wrote:
>> What's wrong here? This code snippet:
>>
>> from pylab import plot, show
>> print Id
>> print pout
>>
>> plot (Id, pout)
>> show()
>>
>> produces:
>> ['50', '100', '150', '200', '250', '300', '350', '400', '450', '500', '550',
>> '600', '650', '700', '750', '800', '850', '900', '950', '1000', '1050']
>> ['0', '7.4', '11.4', '14.2', '16.3', '18.1', '19.3', '20.6', '21.6', '22.6',
>> '23.4', '24.1', '24.9', '25.4', '26.1', '26.5', '26.9', '27.1', '27.3',
>> '27.4', '27.4']
>
> The problem here is that you're trying to plot lists of strings instead of lists of numbers. You need to convert all of these values to numbers. However, matplotlib could behave a bit more nicely in this case rather than simply recursing until it hits the limit.

Ok, my debugging tells me the problem comes down to the units support, specifically this code starting at line 130 in units.py:

    if converter is None and iterable(x):
        # if this is anything but an object array, we'll assume
        # there are no custom units
        if isinstance(x, np.ndarray) and x.dtype != np.object:
            return None

        for thisx in x:
            converter = self.get_converter( thisx )
            return converter

Because a string is iterable, and even a single character is considered iterable, this code recurses forever. I think this can be solved by, in addition to the iterable() check, making sure that x is not string-like. If it is, this will return None as the converter. Somehow, this actually will then plot properly. I'm still trying to run down why this works, but I'm running out of time for the day. I will say that the data set for the line2D object is indeed a masked array of dtype ('|S4').

Anyone object to adding the check?

In addition, why are we looping over thisx in x but returning inside the loop? Wouldn't this *always* be the same as x[0]?

Ryan

--
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
From: Ryan M. <rm...@gm...> - 2009-01-15 15:52:38
Neal Becker wrote:
> What's wrong here? This code snippet:
>
> from pylab import plot, show
> print Id
> print pout
>
> plot (Id, pout)
> show()
>
> produces:
> ['50', '100', '150', '200', '250', '300', '350', '400', '450', '500', '550',
> '600', '650', '700', '750', '800', '850', '900', '950', '1000', '1050']
> ['0', '7.4', '11.4', '14.2', '16.3', '18.1', '19.3', '20.6', '21.6', '22.6',
> '23.4', '24.1', '24.9', '25.4', '26.1', '26.5', '26.9', '27.1', '27.3',
> '27.4', '27.4']

The problem here is that you're trying to plot lists of strings instead of lists of numbers. You need to convert all of these values to numbers. However, matplotlib could behave a bit more nicely in this case rather than simply recursing until it hits the limit.

Ryan

--
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
From: Sandro T. <mo...@de...> - 2009-01-14 17:31:26
On Wed, Jan 14, 2009 at 18:22, John Hunter <jd...@gm...> wrote:
> On Wed, Jan 14, 2009 at 10:26 AM, Sandro Tosi <mo...@de...> wrote:
>> I was wondering whether, for the time being, I could upload to experimental: developers, do you have any plan to release .3 soon? If not, an upload to our "experimental" area (that's a sort of unstable, but with packages not ready for prime time) could be an easy way for our users to have mpl on a Debian system. Let me know; the package is ready, it just needs the upload.
>
> Well, I've been planning to do it for some time now but have just been too busy. It's high on my list of things to do though!

:) I supposed the situation was something like that. Well, given that the package has been sitting ready for a long time, and even Ubuntu has expressed the wish to pull a new version from Debian, I think I'll upload to experimental tonight. Once .3 is ready, I'll package and upload it too.

Thanks for your work!

Cheers,
--
Sandro Tosi (aka morph, morpheus, matrixhasu)
My website: http://matrixhasu.altervista.org/
Me at Debian: http://wiki.debian.org/SandroTosi
From: John H. <jd...@gm...> - 2009-01-14 17:22:26
On Wed, Jan 14, 2009 at 10:26 AM, Sandro Tosi <mo...@de...> wrote:
> On Wed, Jan 14, 2009 at 14:14, John Travers <jt...@gm...> wrote:
>> On Wed, Jan 14, 2009 at 8:10 AM, Sandro Tosi <mo...@de...> wrote:
>>> Hi all,
>>> due to some requests that came in lately, I decided to upload the "temp" Debian packages for 0.98.5.2.
>>>
>>> They are available at [1].
>>
>> Any chance you could add them to your apt repository?
>
> I was wondering whether, for the time being, I could upload to experimental: developers, do you have any plan to release .3 soon? If not, an upload to our "experimental" area (that's a sort of unstable, but with packages not ready for prime time) could be an easy way for our users to have mpl on a Debian system. Let me know; the package is ready, it just needs the upload.

Well, I've been planning to do it for some time now but have just been too busy. It's high on my list of things to do though!

JDH