From: John H. <jdh...@ac...> - 2005-02-08 23:50:43
>>>>> "John" == John Hunter <jdh...@ac...> writes:
John> Gee, I'm starting to get the feeling you're the first person
John> who's ever actually *used* matplotlib :-(
No, but maybe you're the only one who is stress-testing CVS :-)
This is another CVS bug, introduced when I let the figure store its
canvas, in response to a request by ChrisBarker to be able to do
fig.savefig('blah')
which forwards the call on to the canvas.
I was failing to reset the canvas instance for a given figure after a
call to switch_backends (which is what happens when you save a ps fig
from agg).
This is (should be) fixed in CVS.
JDH
From: Jochen V. <vo...@se...> - 2005-02-08 23:44:06
Hello,
On Tue, Feb 08, 2005 at 01:39:29PM -0700, Fernando Perez wrote:
> Well, it's true that if there is no easy way to 'reopen' the window in its
> exact previous state, Ctrl-q might be a better solution. In gnuplot, an
> accidental close is just a 'replot' command away, so nothing is ever lost.
We could at least have "q" if show is run from a script, and use the more
complicated key presses only if it is used interactively. This would be
very convenient.
All the best,
Jochen
--
http://seehuhn.de/
From: Jochen V. <vo...@se...> - 2005-02-08 23:42:25
Hello,
On Tue, Feb 08, 2005 at 02:45:05PM -0600, John Hunter wrote:
> Actually, I think the right way to do it is to save the x and y arrays
> as postscript arrays, and then generate the postscript for loop to
> iterate over the arrays at postscript render time. ...
> The major feature of postscript that the mpl backend
> currently underutilizes, dare I say almost ignores, is that postscript
> is a programming language. ...
From reading parts of the green book ("PostScript Language Program Design")
I gathered the impression that the preferred way of generating PostScript
is to do most of the calculations before writing the PostScript.
Some references from the green book:
p. 9:   It is better to translate into the PostScript imaging model
        than to maintain another set of graphics primitives using
        the PostScript language for ...
p. 80:  Remember that the PostScript language is interpreted, and
        that it is a programming language only as one aspect of its
        design.  It is best not to defer calculation (such as
        division problems, computing the diameter of a circle, or
        figuring the length of some text) to the interpreter.
        Instead, perform these calculations on the host system as
        the script is being generated, providing the data to the
        procedures in the format expected by the PostScript language
        and the individual operators used.
The section "Computation" starting at p. 83 has a similar stance.
The computers matplotlib runs on are probably many times faster
than the ones built into printers and every PostScript file is
generated only once but potentially printed many times. Therefore
I would rather spend additional CPU cycles in backend_ps.py to
get a compact and computationally simple PostScript file.
Thus I do not think that the "PostScript loop" mentioned above is
a good idea.
> I'll leave it to Jochen to decide on the patch and/or apply it -- on
> my quick read the changes look sensible, and since it passes
> backend_driver, it must be good!
The first version of the patch looked good to me.
Unfortunately I cannot spend any time on this before Thursday.
I suggest that you just apply the patch to CVS and I have
a look at the result on Thursday.
All the best and thank you very much and good night,
Jochen
--
http://seehuhn.de/
From: John H. <jdh...@ac...> - 2005-02-08 23:08:12
>>>>> "Fernando" == Fernando Perez <Fer...@co...> writes:
Fernando> Something is remembering state where it shouldn't, and
Fernando> after a call to savefig() with an eps extension, further
Fernando> pngs are clobbered into eps. Is this a misuse of the
Fernando> library on my part? I read the savefig docstring, and I
Fernando> don't see an error in my usage.
Gee, I'm starting to get the feeling you're the first person who's ever
actually *used* matplotlib :-(
I'll look into it...
JDH
From: Fernando P. <Fer...@co...> - 2005-02-08 22:58:15
Hi all,
this looks wrong to me:
In [1]: plot([1,2,3])
Out[1]: [<matplotlib.lines.Line2D instance at 0x410dafec>]
In [2]: savefig('foo.png')
In [3]: !file foo.png
foo.png: PNG image data, 812 x 612, 8-bit/color RGBA, non-interlaced
In [4]: savefig('foo.eps')
In [5]: !file foo.eps
foo.eps: PostScript document text conforming at level 3.0 - type EPS
So far so good. BUT:
In [6]: savefig('foo.png')
In [7]: !file foo.png
foo.png: PostScript document text conforming at level 3.0
Something is remembering state where it shouldn't, and after a call to
savefig() with an eps extension, further pngs are clobbered into eps. Is this
a misuse of the library on my part? I read the savefig docstring, and I don't
see an error in my usage.
Cheers,
f
From: Fernando P. <Fer...@co...> - 2005-02-08 21:05:52
John Hunter wrote:
> Yes, 800k is a bit large. Well, if you get something nice and usable,
> be sure to send it my way!
I don't know if it applies to this case, but in general for smaller file
sizes, it would be nice to have pylab's load/save commands automatically
support gzipped files, no? It's not as efficient as using binary files
directly, but in many cases it may be 'small enough'.
Since it's quicker for stuff like this to code a patch implementing it rather
than beg for it, here it goes.
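A minimal sketch of the idea (the helper names here are hypothetical, and it
uses plain gzip plus modern numpy rather than the actual pylab patch):

    import gzip
    import numpy as np

    def save_array(fname, X, fmt='%.18e'):
        """Write a 2-D array as text; gzip transparently if fname ends in '.gz'."""
        opener = gzip.open if fname.endswith('.gz') else open
        with opener(fname, 'wt') as fh:
            for row in np.atleast_2d(X):
                fh.write(' '.join(fmt % val for val in row) + '\n')

    def load_array(fname):
        """Read such a text array back, handling gzipped files transparently."""
        opener = gzip.open if fname.endswith('.gz') else open
        with opener(fname, 'rt') as fh:
            return np.array([[float(v) for v in line.split()]
                             for line in fh if line.strip()])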
A quick test:
In [1]: x=frange(0,2*pi,npts=1000)
In [2]: y=sin(x)
In [3]: save('arr.dat.gz',(x,y))
In [4]: arr=load('arr.dat.gz')
In [5]: x1,y1 = arr[0],arr[1]
In [6]: l2norm(x-x1)
Out[6]: 0.0
In [7]: l2norm(y-y1)
Out[7]: 0.0
Apply if you like it.
Regards,
f
From: John H. <jdh...@ac...> - 2005-02-08 20:55:28
>>>>> "Fernando" == Fernando Perez <Fer...@co...> writes:
Fernando> But since I don't know whether numeric/numarray provide
Fernando> fully consistent array2str functions (I only have
Fernando> numeric to test with), I'm a bit afraid of touching this
Fernando> part. It's also possible that John's backend
Fernando> architectural changes end up modifying this, so perhaps
Fernando> such changes are best thought about after John finishes
Fernando> his reorganization.
No, now is a good time. Right now I'm proposing the *addition* of a
new method to the backend, with the *eventual* deletion of redundant
methods. draw_lines is one of those core methods that I think
deserves to remain and be highly optimized.
Fernando> However, I'm afraid to rewrite this in a low-level way,
Fernando> because of the numeric/numarray difference. The right
Fernando> approach for this would be to generate a string
Fernando> representation of the array via numeric/numarray, which
Fernando> can do it in C. And then, that can be modified to add
Fernando> the m/l end of line markers on the first/rest lines via
Fernando> a (fast) python string operation.
Actually, I think the right way to do it is to save the x and y arrays
as postscript arrays, and then generate the postscript for loop to
iterate over the arrays at postscript render time. Then we have
no loop in python and smaller file sizes. I think this approach in
general could be a big win in PS both in terms of file size and file
creation times. The major feature of postscript that the mpl backend
currently underutilizes, dare I say almost ignores, is that postscript
is a programming language.
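As a rough illustration of that idea (a hand-written sketch, not matplotlib
code), the backend could dump the data as PostScript arrays and emit a
PostScript-level loop instead of one lineto per point:

    def ps_polyline_as_loop(x, y):
        """Sketch: store x/y as PostScript arrays and let the interpreter
        iterate over them at render time."""
        xs = ' '.join('%1.3f' % v for v in x)
        ys = ' '.join('%1.3f' % v for v in y)
        return '\n'.join([
            '/xs [ %s ] def' % xs,
            '/ys [ %s ] def' % ys,
            'xs 0 get ys 0 get moveto',
            '1 1 xs length 1 sub {',   # for i = 1 .. npoints-1
            '  /i exch def',
            '  xs i get ys i get lineto',
            '} for',
            'stroke',
        ])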
I'll leave it to Jochen to decide on the patch and/or apply it -- on
my quick read the changes look sensible, and since it passes
backend_driver, it must be good!
JDH
From: Fernando P. <Fer...@co...> - 2005-02-08 20:43:23
Jochen Voss wrote:
> I will fix set_linejoin and set_linecap separately at the moment, but
> I am also not so happy with the other changes as mentioned above.
OK, I updated to CVS, which fixed the PS problems, many thanks. I then
reapplied my patch and fixed it a bit here and there. Here's the new version,
against current CVS.
There is no significant win in backends_driver (a couple percent points at
best), but no regression either. And in a few places, the code avoids O(N^2)
operations, which are the kind of thing that can blow up in your face
unexpectedly with a large dataset of the right kind, yet go totally unnoticed
in typical usage. I ran the whole backends_driver and looked at the PS, they
seem OK to me. Feel free to apply if you are OK with it.
The two routines where there might be really significant gains are
def _draw_lines(self, gc, points):
    """
    Draw many lines. 'points' is a list of point coordinates.
    """
    # inline this for performance
    ps = ["%1.3f %1.3f m" % points[0]]
    ps.extend(["%1.3f %1.3f l" % point for point in points[1:] ])
    self._draw_ps("\n".join(ps), gc, None)

def draw_lines(self, gc, x, y):
    """
    x and y are equal length arrays, draw lines connecting each
    point in x, y
    """
    if debugPS:
        self._pswriter.write("% lines\n")
    start = 0
    end = 1000
    points = zip(x,y)
    while 1:
        to_draw = points[start:end]
        if not to_draw:
            break
        self._draw_lines(gc,to_draw)
        start = end
        end += 1000
Currently, these zip a pair of arrays and then format them into text
manually. This means resorting to python loops over lists of tuples,
something bound to be terribly inefficient. I can imagine this being
quite slow for plots with many thousands of lines.
However, I'm afraid to rewrite this in a low-level way, because of the
numeric/numarray difference. The right approach for this would be to generate
a string representation of the array via numeric/numarray, which can do it in
C. And then, that can be modified to add the m/l end of line markers on the
first/rest lines via a (fast) python string operation.
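A hedged sketch of that approach, written with modern numpy rather than the
Numeric/numarray of the time (so the function names differ from what was
actually available then):

    import numpy as np

    def points_to_ps(x, y):
        """Format x/y as 'x y m' / 'x y l' PostScript commands without a
        per-point Python loop over tuples."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        # vectorized float formatting and concatenation
        coords = np.char.add(np.char.add(np.char.mod('%1.3f', x), ' '),
                             np.char.mod('%1.3f', y))
        ops = np.full(len(x), ' l', dtype='<U2')
        ops[0] = ' m'   # moveto for the first point, lineto for the rest
        return '\n'.join(np.char.add(coords, ops))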
But since I don't know whether numeric/numarray provide fully consistent
array2str functions (I only have numeric to test with), I'm a bit afraid of
touching this part. It's also possible that John's backend architectural
changes end up modifying this, so perhaps such changes are best thought about
after John finishes his reorganization.
But I think the patch is a safe, small cleanup to apply now.
Cheers,
f
From: John H. <jdh...@ac...> - 2005-02-08 20:41:36
>>>>> "Jochen" == Jochen Voss <vo...@se...> writes:
Jochen> Hello John, thanks a lot for your help!!! The maths still
Jochen> does not work, but matplotlib plots my non-working
Jochen> datasets now in a beautiful way. I updated my file at
Jochen> http://seehuhn.de/data/as1.png
Jochen> if you are curious.
Thanks -- btw, this looks like a good case to use (and test!) the
sharex feature now in CVS. See examples/shared_axis_demo.py. The
basic idea is that if your multiple subplots are over the same x
range, you can set sharex and then navigation on one axis is
automatically shared with the others, as are changes in tick
locations, etc...
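For reference, a minimal sketch of that usage pattern (the sharex keyword
shown here is the modern spelling and may differ slightly from the CVS API
under discussion):

    from pylab import figure, show
    import numpy as np

    t = np.arange(0.0, 2.0, 0.01)
    fig = figure()
    ax1 = fig.add_subplot(211)
    ax1.plot(t, np.sin(2 * np.pi * t))

    # share the x axis: pan/zoom and tick changes on one axes follow the other
    ax2 = fig.add_subplot(212, sharex=ax1)
    ax2.plot(t, np.cos(2 * np.pi * t))
    show()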
Jochen> I would be very happy to provide something similar as an
Jochen> example. My data is not proprietary but huge. Let me see
Jochen> whether I can come up with a picture that does not require
Jochen> the full data-set but looks similar.
Yes, 800k is a bit large. Well, if you get something nice and usable,
be sure to send it my way!
JDH
From: Fernando P. <Fer...@co...> - 2005-02-08 20:39:33
John Hunter wrote:
> I'd like to hear from Steve, who along with Fernando is a resident UI
> design expert, about whether this is a good idea.  I'm still smarting
> after he made me remove the close button from the toolbar :-)
Well, it's true that if there is no easy way to 'reopen' the window in its
exact previous state, Ctrl-q might be a better solution.  In gnuplot, an
accidental close is just a 'replot' command away, so nothing is ever lost.
If no such thing can easily be done in pylab (like a reopen() command, or
somesuch), it's perhaps safer to put a slight barrier in front of the
closing command.  In that case, Ctrl-w may be more UI-consistent, as
Ctrl-q is generally used for closing a whole app, while Ctrl-w closes an
individual window.
Best,
f
From: John H. <jdh...@ac...> - 2005-02-08 20:31:28
>>>>> "Fernando" == Fernando Perez <Fer...@co...> writes:
Fernando> +1 for 'q', it's the one gnuplot keybinding I forgot to
Fernando> mention to John in a private message yesterday about
Fernando> further gnuplot keybindings.
FYI, in case you want to experiment with various keybindings, in
backend_bases.py the FigureManagerBase.key_press method can be easily
extended. Eg, to bind 'q' to close the window
def key_press(self, event):
    # these bindings happen whether you are over an axes or not
    if event.key == 'q':
        self.destroy() # how cruel to have to destroy oneself!
        return

    if event.inaxes is None: return

    # the mouse has to be over an axes to trigger these
    if event.key == 'g':
        event.inaxes.grid()
        self.canvas.draw()
    elif event.key == 'l':
        event.inaxes.toggle_log_lineary()
        self.canvas.draw()
I'd like to hear from Steve, who along with Fernando is a resident UI
design expert, about whether this is a good idea. I'm still smarting
after he made me remove the close button from the toolbar :-)
JDH
From: Jochen V. <vo...@se...> - 2005-02-08 20:21:29
Hello John,
thanks a lot for your help!!!
The maths still does not work, but matplotlib plots my
non-working datasets now in a beautiful way. I updated
my file at
http://seehuhn.de/data/as1.png
if you are curious.
I would be very happy to provide something similar as an example. My
data is not proprietary but huge. Let me see whether I can come up
with a picture that does not require the full data-set but looks
similar.
All the best,
Jochen
--
http://seehuhn.de/
From: Fernando P. <Fer...@co...> - 2005-02-08 20:20:09
Jochen Voss wrote:
> Hello,
>
> do you think it would be possible to make the "q" key (as in gnuplot)
> or "Ctrl-q" (as in many other programs) close the matplotlib window?
> Currently I always have to catch the mouse to close the window, which
> for a fast mouse and a slow Jochen is a little bit cumbersome.
+1 for 'q', it's the one gnuplot keybinding I forgot to mention to John in
a private message yesterday about further gnuplot keybindings.
I hate programs which require an extra Ctrl- modifier when the window has
no explicit text input, hence no need whatsoever for the Ctrl- (other than
foolish consistency, which as we all know is the hobgoblin of little minds
:)  This is one of the things I love about gv, xpdf, gnuplot and
ImageMagick's display: they all get out of your way with a simple 'q'
(quite a few other 'old school' unix utils have this, while all of the
newfangled gnome/kde stuff sticks mindlessly to Ctrl- modifiers even where
they make no sense whatsoever).
If Ctrl-q is enabled _in addition_ to plain 'q', that's OK by me.
Best,
f
From: Jochen V. <vo...@se...> - 2005-02-08 20:09:56
Hello,
do you think it would be possible to make the "q" key (as in gnuplot)
or "Ctrl-q" (as in many other programs) close the matplotlib window?
Currently I always have to catch the mouse to close the window, which
for a fast mouse and a slow Jochen is a little bit cumbersome.
All the best,
Jochen
--
http://seehuhn.de/
From: John H. <jdh...@ac...> - 2005-02-08 19:45:53
>>>>> "Jochen" == Jochen Voss <vo...@se...> writes:
Jochen> http://seehuhn.de/data/as1.tar.gz
OK, you were right. That was easy :-)
title is an axes command, so

    figure(figsize=(10,7),dpi=100)
    title(title_str)

implicitly makes a call to gca(), which creates subplot(111) by
default. Move the title command to below the first subplot call and
your problem will be fixed.
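A minimal sketch of that fix (placeholder data and title string):

    from pylab import figure, subplot, title, plot, savefig

    figure(figsize=(10, 7), dpi=100)
    # create the first subplot before calling title(), so that title() does
    # not implicitly create an overlapping subplot(111) via gca()
    subplot(311)
    title('my title')
    plot([1, 2, 3])
    savefig('as1.png')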
matlab handles this situation on calls to subplot by deleting other
subplots that overlap the one just generated. This would probably be
the most user friendly way to do it -- you could still make
overlapping Axes by either using the axes command or the
fig.add_subplot command, but perhaps in the pylab interface we should
help out by doing the thing that is right 99% of the time. With the
philosophy that if you truly want overlapping axes, you're enough of a
guru to use axes to get it. Rarely would you intentionally generate
overlapping axes with the subplot command, because it doesn't offer
enough precision.
BTW, see examples/figtext.py for an example of how to create a figure
title, as opposed to an axes title.
JDH
From: Jochen V. <vo...@se...> - 2005-02-08 19:32:19
Hello John,
On Tue, Feb 08, 2005 at 11:17:45AM -0600, John Hunter wrote:
> Could you tar up a complete example (with data files if necessary)
> that I can run on my end.
Sure, you can find it at
http://seehuhn.de/data/as1.tar.gz
I will first spend some time to find mathematical reasons why the
red and green lines do not properly fall into the red and green bands
(they are supposed to), and then try to disable caching and to run
it with the older matplotlib version.
Thanks for your help,
Jochen
PS.: from looking at the output again I guess that the problem is caused
by an additional coordinate frame drawn around the picture which overlaps
the frames for the subplots. ???
--
http://seehuhn.de/
From: John H. <jdh...@ac...> - 2005-02-08 17:56:34
>>>>> "Paul" == Paul Barrett <ba...@st...> writes:
Paul> Though the idea of having a minimal set of draw methods is
Paul> esthetically appealing, in practice having to reuse these
Paul> few commonly drawn methods to construct simple compound
Paul> objects can become cumbersome and annoying. I would suggest
Paul> taking another look at Adobe's PDF language and Apple's
Paul> Quartz (or display PDF).  It is interesting to see that each
Paul> new version adds one or two new draw methods, e.g. rectangle
Paul> and rectangles, to make it easier and faster to draw these
Paul> common paths. I'm guessing these new methods are based on
Paul> copious experience.
I think the idea has a lot of merit -- it's in keeping with the Tufte
philosophy "copy the great architectures" --
http://www.uwex.edu/ces/susag/Design/tufte.html -- which I tend to
agree with. The current design is built on the GTK drawing model,
which is good but not great.
The clear benefits of your suggestion that I see are that 1) it would
make matplotlib work as a front end to the kiva renderer, which might
ease its integration into envisage, 2) we would get a PDF backend and
Aqua backends for free, and 3) it would also free us from having to
think about the design -- as you note, that problem has already been
solved.
But there are some downsides too. The problem is that Quartz is a high
level drawing API. Forcing every backend to implement the Quartz API
in order to qualify as a matplotlib backend is a fairly high barrier
to entry. My idea is that the matplotlib front end should implement
these convenience functions, which they already do in patch.Rectangle,
patch.Circle, Line2D._draw_plus and so on, and the life of the backend
implementer should be as easy as possible. They draw the primitives
and the front end does the rest.
You are right that doing everything with draw_path can be less
efficient, but I feel like we have enough experience under our belt at
this point to know where the bottlenecks are. If the matplotlib front
end code is designed right, we won't need to make 18 million calls to
draw_rectangle -- in which case you want your rectangle drawing
function to be damned efficient -- we'll call draw_markers or use a
polygon collection.
We could have the best of all worlds of course. If we implement the
Quartz API and have all the base class methods call the low level
backend method draw_path, allowing different backends to override
selected methods where desired for performance, then we preserve the
low barrier to entry -- just implement draw path and a few more bits
-- and get kiva compatibility, a pdf backend, an aqua backend and a
solid backend API which we don't have to think a lot about. It's also
a lot more work, of course, which I don't really have time for right
now. But if you're offering.... I will take another look at it
though.
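A hedged sketch of that layering idea (the class and method names below are
illustrative only, not the actual matplotlib backend API):

    MOVETO, LINETO, ENDPOLY = range(3)

    class RendererBase:
        """Backends must implement draw_path; everything else has a default
        that lowers to a path and can be overridden where a native call is
        faster."""

        def draw_path(self, gc, path):
            raise NotImplementedError("each backend implements draw_path")

        def draw_line(self, gc, x0, y0, x1, y1):
            self.draw_path(gc, [(MOVETO, x0, y0), (LINETO, x1, y1)])

        def draw_rectangle(self, gc, x, y, width, height):
            self.draw_path(gc, [(MOVETO, x, y),
                                (LINETO, x + width, y),
                                (LINETO, x + width, y + height),
                                (LINETO, x, y + height),
                                (ENDPOLY,)])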
As an aside, a funny story. After talking to you at scipy about the
Quartz API, I ordered the PDF reference from Adobe on Amazon.
Apparently, Amazon had changed the defaults in my one-click settings
and the book went to my mom by default. My mom likes computers, but
the PDF specification is a bit much for her. Perplexed, she first
called up my sister and asked her if she knew what was going on and my
sister replied, "John told me he was sending you a book he was really
excited about", referring to an unrelated conversation between us. So
then my mom called me, trying to be thankful for the book I had sent
her. When I finally figured out what was going on and explained to
her that she had gotten the PDF spec by mistake, she was audibly
relieved.
Paul> While you are at it, I would suggest looking at the clipping
Paul> issue. As in PS and PDF, a path can be used as a clipping
Paul> region. This allows the backend, in PS and PDF, to do the
Paul> clipping for you, which can make the code simpler and also
Paul> faster. Clipping using arbitrary paths may currently be an
Paul> issue for AGG, but in my opinion, it is something that AGG
Paul> will eventually have to come to grips with.
Arbitrary clipping paths are a priority for me, and agg does support
them now with the scanline boolean algebra -- see the recently updated
http://matplotlib.sf.net/goals.html page for more information and
links. Implementing paths in matplotlib is a first step to supporting
arbitrary clipping paths, which are currently needed for Jeffrey
Whittaker's basemap extension.
JDH
From: John H. <jdh...@ac...> - 2005-02-08 17:28:04
>>>>> "Jochen" == Jochen Voss <vo...@se...> writes:
Jochen> Hello, I tried the following code with the current CVS
Jochen> version of matplotlib.
Your code looks fine on quick inspection.
Could you tar up a complete example (with data files if necessary)
that I can run on my end.
Also, as a quick test, in text.py in the _get_layout method, comment
out the line
if self.cached.has_key(key): return self.cached[key]
I've seen bugs that look like this before which resulted from me not
having a hash key that uniquely specified the text instance, resulting
in it being drawn in the wrong place. I thought I fixed this though.
Also, do you get this with matplotlib-0.71 as well as CVS? I've been doing
a lot of tinkering, not always for the best, as you've seen.
Also, if the data is not proprietary, you might consider submitting
this to the examples directory, because it is a nice illustration of
how to use fill to indicate time varying ranges.
JDH
From: Jochen V. <vo...@se...> - 2005-02-08 17:16:55
Hello,
I tried the following code with the current CVS version of matplotlib.
figure(figsize=(10,7),dpi=100)
title(title_str)
subplot(311)
f=fill(list(t)+list(t[::-1]),list(etn1+dev1)+list((etn1-dev1)[::-1]),
       facecolor=(1.0,0.8,0.8),lw=0)
plot(t,sig,"r")
subplot(312)
plot(t,sin(obs),"r")
plot(t,cos(obs),"g")
subplot(313)
f=fill(list(t)+list(t[::-1]),list(etn2+dev2)+list((etn2-dev2)[::-1]),
       facecolor=(0.8,1.0,0.8),lw=0)
plot(t,s2,"g")
savefig("as1.png")
The result is displayed at
http://seehuhn.de/data/as1.png
Unfortunately the labels to the left and at the bottom are weird.
Numbers in wrong places, sometimes overlapping.
Did I do something wrong or is this a problem with matplotlib?
All the best,
Jochen
--
http://seehuhn.de/
From: Paul B. <ba...@st...> - 2005-02-08 16:21:19
John Hunter wrote:
>
> I've begun to address some of these concerns with a new backend method
> "draw_markers".  Currently the backend has too many drawing methods,
> and this is yet another one.  My goal is to define a core set, many
> fewer than we have today, and do away with most of them.  Eg
> draw_pixel, draw_line, draw_point, draw_rectangle, draw_polygon, can
> all be replaced by draw_path, with paths comprised solely of MOVETO,
> LINETO and (optionally) ENDPOLY.
Though the idea of having a minimal set of draw methods is esthetically
appealing, in practice having to reuse these few commonly drawn methods
to construct simple compound objects can become cumbersome and annoying.
I would suggest taking another look at Adobe's PDF language and Apple's
Quartz (or display PDF).  It is interesting to see that each new version
adds one or two new draw methods, e.g. rectangle and rectangles, to make
it easier and faster to draw these common paths.  I'm guessing these new
methods are based on copious experience.
While you are at it, I would suggest looking at the clipping issue.  As
in PS and PDF, a path can be used as a clipping region.  This allows the
backend, in PS and PDF, to do the clipping for you, which can make the
code simpler and also faster.  Clipping using arbitrary paths may
currently be an issue for AGG, but in my opinion, it is something that
AGG will eventually have to come to grips with.
Just my $0.01.
-- Paul
--
Paul Barrett, PhD          Space Telescope Science Institute
Phone: 410-338-4475        ESS/Science Software Branch
FAX: 410-338-4767          Baltimore, MD 21218
From: John H. <jdh...@ac...> - 2005-02-08 15:01:49
In dealing with the profiler output from some of Fernando's log plots,
I was reminded of the very inefficient way matplotlib handles marker
plots -- see my last post "log scaling fixes in backend_ps.py" for
details. Some of these problems were fixed for scatter plots using
collections, but line markers remained inefficient.
On top of this inefficiency, there have been three lingering problems
with backend design that have bothered me. 1) No path operations
(MOVETO, LINETO, etc), 2) transforms are being done in the front end
which is inefficient (some backends have transformations for free, eg
postscript) and can lead to plotting artifacts (eg Agg, which has a
concept of subpixel rendering), and 3) backends have no concept of
state or a gc stack, which can lead to lots of repetitive code and
needless function calls.
I've begun to address some of these concerns with a new backend method
"draw_markers". Currently the backend has too many drawing methods,
and this is yet another one. My goal is to define a core set, many
fewer than we have today, and do away with most of them. Eg
draw_pixel, draw_line, draw_point, draw_rectangle, draw_polygon, can
all be replaced by draw_path, with paths comprised solely of MOVETO,
LINETO and (optionally) ENDPOLY.
Which leads me to question one for the backend maintainers: can you
support a draw_path method? I'm not sure that GTK and WX can. I have
no idea about FLTK, and QT, but both of these are Agg backends so it
doesn't matter. All the Agg backends automagically get these for
free. I personally would be willing to lose the native GTK and WX
backends.
I've implemented draw_markers for agg in CVS. lines.py tests for this
on the renderer so it doesn't break any backend code right now.
Basically if you implement draw_markers, it calls it, otherwise it
does what it always did. This leads to complicated code in lines.py
that I hope to flush when the port to the other backends is complete.
draw_markers is the start of fixing backend problems 1 and 2 above.
Also, I will extend the basic path operations to support splines,
which will allow for more sophisticated drawing, and better
representations of circles -- eg Knuth uses splines to draw circles in
TeX) which are a very close approximation to real circles.
I'm not putting this in backend_bases yet since I'm using the presence
of the method as the test for whether a backend is ported yet in
lines.py
def draw_markers(self, gc, path, x, y, transform):
path is a list of path elements -- see matplotlib.paths. Right now
path is only a data structure, which suffices for my simple needs
right now, but we can provide user friendly methods to facilitate the
generation of these lists down the road.
The coordinates of the "path" argument in draw_markers are display (eg
points for PS) and are simply points*dpi (this could be generalized if
need be with its own transform, but I don't see a need right now --
markers in matplotlib are by definition in points). x and y are in
data coordinates, and transform is a matplotlib.transform
Transformation instance. There are a few types of transformations
(separable, nonseparable and affine) but all three have a consistent
interface -- there is an (optional) nonlinear component, eg log or
polar -- and all provide an affine vec 6. Thus the transformation can
be done like
if transform.need_nonlinear():
    x, y = transform.nonlinear_only_numerix(x, y)

# the a,b,c,d,tx,ty affine which transforms x and y
vec6 = transform.as_vec6_val()

# apply an affine transformation of x and y
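For concreteness, the affine step left as a comment above might look like
this (a sketch only; the a,b,c,d,tx,ty ordering follows the usual
PostScript-style convention, with numpy standing in for the numerix layer):

    import numpy as np

    def apply_vec6(vec6, x, y):
        """Apply the affine (a, b, c, d, tx, ty) to arrays of x, y values."""
        a, b, c, d, tx, ty = vec6
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        return a * x + c * y + tx, b * x + d * y + ty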
This setup buys us a few things -- for large data sets, it can save
the cost of doing the transformation for backends that have
transformations built in (eg ps, when the transformation happens at
rendering). For agg, it reduces the number of passes through the data
since the transformation happens on the rendering loop, which it has
to make anyway. It also allows agg to try/except the nonlinear
transformation part, and drop data points which throw a domain_error
(nonpositive log). This means you can toggle log/linear axes with the
'l' command and won't raise even if you have nonpositive data on the
log axes.
Most importantly it buys you speed, since the graphics context and
marker path only need to be set once, outside the loop, and then you
can iterate over the x,y position vertices and draw that marker at
each position. This results in a 10x performance boost for large
numbers of markers in agg
Old:
N=001000: 0.24 s
N=005000: 0.81 s
N=010000: 1.30 s
N=050000: 5.97 s
N=100000: 11.46 s
N=500000: 56.87 s
New:
N=001000: 0.13 s
N=005000: 0.19 s
N=010000: 0.28 s
N=050000: 0.66 s
N=100000: 1.04 s
N=500000: 4.51 s
agg implements this in extension code, which might be harder for
backend writers to follow as an example. So I wrote a template in
backend ps, which I named _draw_markers -- the underscore prevents it
from actually being called by lines.py. It is basically there to show
other backend writers how to iterate over the data structures and use
the transform
def _draw_markers(self, gc, path, x, y, transform):
    """
    I'm underscore hiding this method from lines.py right now
    since it is incomplete

    Draw the markers defined by path at each of the positions in x
    and y.  path coordinates are points, x and y coords will be
    transformed by the transform
    """
    if debugPS:
        self._pswriter.write("% markers\n")

    if transform.need_nonlinear():
        x, y = transform.nonlinear_only_numerix(x, y)

    # the a,b,c,d,tx,ty affine which transforms x and y
    vec6 = transform.as_vec6_val()

    # this defines a single vertex.  We need to define this as a ps
    # function, properly stroked and filled with linewidth etc,
    # and then simply iterate over the x and y and call this
    # function at each position.  Eg, this is the path that is
    # relative to each x and y offset.
    ps = []
    for p in path:
        code = p[0]
        if code==MOVETO:
            mx, my = p[1:]
            ps.append('%1.3f %1.3f m' % (mx, my))
        elif code==LINETO:
            mx, my = p[1:]
            ps.append('%1.3f %1.3f l' % (mx, my))
        elif code==ENDPOLY:
            fill = p[1]
            if fill:  # we can get the fill color here
                rgba = p[2:]

    vertfunc = 'some magic ps function that draws the marker relative to an x,y point'

    # the gc contains the stroke width and color as always
    for i in xrange(len(x)):
        # for large numbers of markers you may need to chunk the
        # output, eg dump the ps in 1000 marker batches
        thisx = x[i]
        thisy = y[i]
        # apply the affine transform to x and y to define the marker center
        # draw_marker_here
        print 'I did nothing!'
For PS specifically, ideally we would define a native postscript
function for the path, and call this function for each vertex. Can
you insert PS functions at arbitrary points in PS code, or do they
have to reside in the header? If the former, we may want to buffer
the input with stringio to save the functions we need, since we don't
know until runtime which functions we'll be defining.
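As a rough sketch of that (hand-written, not matplotlib code; the PostScript
language does allow procedure definitions anywhere before their first use,
not only in the document header):

    def ps_markers(marker_path_ps, xs, ys):
        """Sketch: define the marker outline as a PostScript procedure once,
        then invoke it at each (x, y) vertex.  marker_path_ps is the outline
        in coordinates relative to the marker center."""
        header = ('/marker { gsave translate newpath %s stroke grestore } '
                  'bind def' % marker_path_ps)
        lines = [header]
        lines.extend('%1.3f %1.3f marker' % (x, y) for x, y in zip(xs, ys))
        return '\n'.join(lines)

    # e.g. a small plus-shaped marker:
    # ps_markers('-1.5 0 moveto 1.5 0 lineto 0 -1.5 moveto 0 1.5 lineto',
    #            [10, 20], [30, 40])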
OK, give it a whirl. Nothing is set in stone so feel free to comment
on the design. I think we could get away with just a few backend
methods:
# draw_lines could be done with a path but we may want to special
# case this for maximal performance
draw_lines
draw_markers
draw_path
... and I'll probably leave the collection methods...
Ted Drain mentioned wanting draw_ellipse for high resolution ellipse
drawing (rather than using discrete vertices). I'm not opposed to it, but I
wonder if the spline method of drawing ellipses referred to above
might not suffice here. In which case draw_ellipse would be subsumed
under draw_path.
Although what I've done is incomplete, I thought it might be better to
get something in CVS to give other backend writers time to implement
it, and to get some feedback before finishing the refactor.
Also, any feedback on the idea of removing GD, native GTK and native
WX is welcome. I'll bounce this off the user list in any case.
JDH
From: John H. <jdh...@ac...> - 2005-02-08 13:17:42
>>>>> "Jochen" == Jochen Voss <vo...@se...> writes:
Jochen> Hello, revisions 1.26 and 1.27 introduced some changes
Jochen> into the PS backend which I do not quite understand. The
Jochen> log messages are "some log optimizations" and "more log
Jochen> scaling fixes". What is the problem they try to fix?
Jochen> Most of the changes consist in replacements like
Jochen>
Jochen>     def set_linewidth(self, linewidth):
Jochen>         if linewidth != self.linewidth:
Jochen> -           self._pswriter.write("%s setlinewidth\n"%_num_to_str(linewidth))
Jochen> +           self._pswriter.write("%1.3f setlinewidth\n"%linewidth)
Jochen>             self.linewidth = linewidth
Jochen> with the result that a linewidth of 2 is now emitted as
Jochen> 2.000 instead of 2. Was this done deliberately? I guess
Jochen> that it can result in a significant increase in PostScript
Jochen> file size. It also broke set_linejoin and set_linecap,
Jochen> which expect integer arguments :-(
I broke this -- thanks for finding it and fixing it -- too much coffee
I guess. Basically, set_linejoin and set_linewidth happen inside a
very nasty inner loop. In lines.py we do (well, "did" -- see next
post)
def _draw_plus(self, renderer, gc, xt, yt):
    offset = 0.5*renderer.points_to_pixels(self._markersize)
    for (x,y) in zip(xt, yt):
        renderer.draw_line(gc, x-offset, y, x+offset, y)
        renderer.draw_line(gc, x, y-offset, x, y+offset)
draw_line in backend ps does
def draw_line(self, gc, x0, y0, x1, y1):
    """
    Draw a single line from x0,y0 to x1,y1
    """
    ps = '%1.3f %1.3f m %1.3f %1.3f l'%(x0, y0, x1, y1)
    self._draw_ps(ps, gc, None, "line")
and the _draw_ps behemoth makes 18 million function calls, including
set_linewidth and friends. To help ease the pain a little, I inlined
_num_to_str, and reduced some of the string processing (eg no
stripping of trailing zeros and '.'). I made a number of changes in
ps along these lines, trading a marginally larger file size for fewer
function calls, but screwed up in the process.
Thanks for covering my back!
See followup post for more on this problem.
JDH
From: Jochen V. <vo...@se...> - 2005-02-08 11:24:37
Hello,
On Tue, Feb 08, 2005 at 11:01:05AM +0000, Jochen Voss wrote:
> I guess that it can result in a significant increase in PostScript
> file size.
Actually it is only a moderate increase. The total generated output of
backend_driver.py grows by 3.3% (from 33623818 to 34725498 bytes).
Fixes for set_linecap and set_linejoin are in CVS.
I hope this helps,
Jochen
--
http://seehuhn.de/
From: Jochen V. <vo...@se...> - 2005-02-08 11:01:02
Hello,
revisions 1.26 and 1.27 introduced some changes into the PS backend
which I do not quite understand. The log messages are "some log
optimizations" and "more log scaling fixes". What is the problem
they try to fix?
Most of the changes consist in replacements like
    def set_linewidth(self, linewidth):
        if linewidth != self.linewidth:
-           self._pswriter.write("%s setlinewidth\n"%_num_to_str(linewidth))
+           self._pswriter.write("%1.3f setlinewidth\n"%linewidth)
            self.linewidth = linewidth
with the result that a linewidth of 2 is now emitted as 2.000 instead
of 2. Was this done deliberately? I guess that it can result in a
significant increase in PostScript file size. It also broke
set_linejoin and set_linecap, which expect integer arguments :-(
I will fix set_linejoin and set_linecap separately at the moment, but
I am also not so happy with the other changes as mentioned above.
All the best,
Jochen
--
http://seehuhn.de/
From: Fernando P. <Fer...@co...> - 2005-02-08 07:15:01
Fernando Perez wrote:
> Jochen Voss wrote:
>> The pstest.py script uses nothing dangerous looking and gv processes
>> the output fine for me. Does your gv literally crash or just emit an
>> error message for the generated PostScript?
OK, I can confirm that the problem is in backend_ps in CVS. I backed off
_just that file_ to the one in 0.71, and it works. So PS generation seems
to be broken in CVS, for all I can tell.
Sorry but I can't track this down further right now.
Best,
f