From: Paul I. <piv...@gm...> - 2012-12-05 21:46:04
|
Hey everyone,

In adding a deprecation warning in this pull request [1], Damon and I learned that DeprecationWarnings are ignored by default in Python 2.7. This is rather unfortunate, and I outlined the possible workarounds that I see in a comment on that PR [2]. In trying to rectify this situation, I have created a MatplotlibDeprecationWarning class that inherits from UserWarning, which is *not* ignored by default. [3]

1. https://github.com/matplotlib/matplotlib/pull/1535
2. https://github.com/matplotlib/matplotlib/pull/1535#issuecomment-11061572
3. https://github.com/matplotlib/matplotlib/pull/1565

Any feedback is appreciated,

best,
--
Paul Ivanov
314 address only used for lists, off-list direct email at:
http://pirsquared.org | GPG/PGP key id: 0x0F3E28F7 |
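A minimal pure-Python sketch of the behavior Paul describes (the class name MplDeprecationWarning below is a stand-in, not necessarily the exact class from the PR): the stock warning filters ignore DeprecationWarning, but a subclass of UserWarning passes through and is shown by default.

```python
import warnings

class MplDeprecationWarning(UserWarning):
    """Stand-in for the MatplotlibDeprecationWarning idea: inheriting
    from UserWarning means the default filters do NOT ignore it."""

with warnings.catch_warnings(record=True) as caught:
    warnings.resetwarnings()  # start from an empty filter list
    # Recreate Python's stock behavior of ignoring DeprecationWarning:
    warnings.filterwarnings("ignore", category=DeprecationWarning)
    warnings.warn("plain deprecation", DeprecationWarning)   # silently dropped
    warnings.warn("mpl deprecation", MplDeprecationWarning)  # survives the filters

# Only the UserWarning subclass made it through.
assert len(caught) == 1
assert issubclass(caught[0].category, UserWarning)
```

The same trade-off applies today: you gain visibility for end users at the cost of the warning no longer being filterable via the DeprecationWarning category.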
From: Eric F. <ef...@ha...> - 2012-12-04 22:32:46
|
On 2012/12/04 12:07 PM, Damon McDougall wrote:
> On Mon, Dec 3, 2012 at 12:12 PM, Chris Barker - NOAA Federal
> <chr...@no...> wrote:
>> generated code is ugly and hard to maintain, it is not designed to be
>> human-readable, and we wouldn't get the advantages of bug-fixes and
>> further development in Cython.
>
> As far as I'm concerned, this is an argument against Cython.

Nonsense. It is an argument against the idea of maintaining the generated code directly, rather than maintaining the Cython source code and regenerating the C code as needed. That idea never made any sense in the first place. I doubt that anyone follows it. Chris already pointed this out.

Would you maintain the assembly code generated by your C++ compiler? Do you consider the fact that this is unreadable and unmaintainable a reason to avoid using that compiler, and instead to code directly in assembly?

> I've had to touch the C/C++/ObjC codebase. It was not automatically
> generated by Cython and it's not that hard to read. There's almost
> certainly a C/C++/ObjC expert around to help out. There's almost
> certainly Cython experts to help out, too. There is almost certainly
> *not* an expert in Cython-generated C code that is hard to read.

There doesn't need to be.

> I vote raw Python/C API. Managing reference counters is not the
> mundane task pythonistas make it out to be, in my opinion. If you know
> ObjC, you've had to do your own reference counting. If you know C,
> you've had to do your own memory management. If you know C++, you've
> had to do your own new/delete (or destructor) management. I agree not
> having to worry about reference counting is a nice positive, but I don't
> think it outweighs the negatives.

You have completely misrepresented the negatives.

> It seems to me that Cython is a 'middle-man' tool, with the added
> downside of hard-to-maintain under-code.

Please, if you don't use Cython yourself, and therefore don't know it well, refrain from these sorts of criticisms.

In normal Cython use, one *never* modifies the code it generates. In developing with Cython, one *might* read this code to find out what is going on, and especially to find out whether one inadvertently triggered a call to the Python API by forgetting to declare a variable, for example. This is pretty easy, because the comments in the generated code show exactly which source line has generated each chunk of generated code. Context is included. It is very nicely done.

Eric |
From: Ryan M. <rm...@gm...> - 2012-12-04 22:20:26
|
On Tue, Dec 4, 2012 at 4:07 PM, Damon McDougall <dam...@gm...> wrote:
> On Mon, Dec 3, 2012 at 12:12 PM, Chris Barker - NOAA Federal
> <chr...@no...> wrote:
>> generated code is ugly and hard to maintain, it is not designed to be
>> human-readable, and we wouldn't get the advantages of bug-fixes and
>> further development in Cython.
>
> As far as I'm concerned, this is an argument against Cython.
>
> I've had to touch the C/C++/ObjC codebase. It was not automatically
> generated by Cython and it's not that hard to read. There's almost
> certainly a C/C++/ObjC expert around to help out. There's almost
> certainly Cython experts to help out, too. There is almost certainly
> *not* an expert in Cython-generated C code that is hard to read.

You've had to touch the C/C++/ObjC because that's the only source that exists; in this case the C *is* the implementation of the wrapper. If we go Cython, the Cython source is all that is maintained. It may be useful to glance at generated code, but no one should be tweaking it by hand: the Cython source, and only the Cython source, represents the implementation of the wrapper.

Ryan

--
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma |
From: Damon M. <dam...@gm...> - 2012-12-04 22:07:58
|
On Mon, Dec 3, 2012 at 12:12 PM, Chris Barker - NOAA Federal
<chr...@no...> wrote:
> generated code is ugly and hard to maintain, it is not designed to be
> human-readable, and we wouldn't get the advantages of bug-fixes and
> further development in Cython.

As far as I'm concerned, this is an argument against Cython.

I've had to touch the C/C++/ObjC codebase. It was not automatically generated by Cython and it's not that hard to read. There's almost certainly a C/C++/ObjC expert around to help out. There's almost certainly Cython experts to help out, too. There is almost certainly *not* an expert in Cython-generated C code that is hard to read.

I vote raw Python/C API. Managing reference counters is not the mundane task pythonistas make it out to be, in my opinion. If you know ObjC, you've had to do your own reference counting. If you know C, you've had to do your own memory management. If you know C++, you've had to do your own new/delete (or destructor) management. I agree not having to worry about reference counting is a nice positive, but I don't think it outweighs the negatives.

It seems to me that Cython is a 'middle-man' tool, with the added downside of hard-to-maintain under-code.

--
Damon McDougall
http://www.damon-is-a-geek.com
Institute for Computational Engineering and Sciences
201 E. 24th St.
Stop C0200
The University of Texas at Austin
Austin, TX 78712-1229 |
From: Benjamin R. <ben...@ou...> - 2012-12-04 18:21:01
|
On Tue, Dec 4, 2012 at 1:10 PM, Michael Droettboom <md...@st...> wrote:
> I think I see what's happened. I accidentally committed a #define in
> there when I was experimenting last week with removing deprecated Numpy
> APIs. It didn't cause things to break for me, but it looks like it could
> break things for more recent Numpys. I've just gone ahead and reverted my
> change. Let me know if that fixes things for you when you get a chance.
>
> Cheers,
> Mike

Looks like that was the problem. The build is now successful. Thanks!

Ben Root |
From: Michael D. <md...@st...> - 2012-12-04 18:10:33
|
I think I see what's happened. I accidentally committed a #define in there when I was experimenting last week with removing deprecated Numpy APIs. It didn't cause things to break for me, but it looks like it could break things for more recent Numpys. I've just gone ahead and reverted my change. Let me know if that fixes things for you when you get a chance.

Cheers,
Mike

On 12/04/2012 10:50 AM, Benjamin Root wrote:
> On Tue, Dec 4, 2012 at 10:43 AM, Michael Droettboom <md...@st...> wrote:
>> It looks like we're using the "old" Numpy API there. Did you
>> recently update Numpy by any chance? I hadn't realised these APIs
>> had been turned off yet, but maybe they are in git master. In any
>> event, we should update these to the new APIs (NPY_UBYTE instead
>> of PyArray_UBYTE etc.).
>>
>> Cheers,
>> Mike
>
> Not since Nov. 5th (which was a fix for a bug I reported in numpy
> master). So, I was using the numpy 1.8.0 dev branch.
>
> Cheers!
> Ben Root |
From: Benjamin R. <ben...@ou...> - 2012-12-04 15:51:12
|
On Tue, Dec 4, 2012 at 10:43 AM, Michael Droettboom <md...@st...> wrote:
> It looks like we're using the "old" Numpy API there. Did you recently
> update Numpy by any chance? I hadn't realised these APIs had been turned
> off yet, but maybe they are in git master. In any event, we should update
> these to the new APIs (NPY_UBYTE instead of PyArray_UBYTE etc.).
>
> Cheers,
> Mike

Not since Nov. 5th (which was a fix for a bug I reported in numpy master). So, I was using the numpy 1.8.0 dev branch.

Cheers!
Ben Root |
From: Michael D. <md...@st...> - 2012-12-04 15:43:53
|
It looks like we're using the "old" Numpy API there. Did you recently update Numpy by any chance? I hadn't realised these APIs had been turned off yet, but maybe they are in git master. In any event, we should update these to the new APIs (NPY_UBYTE instead of PyArray_UBYTE etc.).

Cheers,
Mike

On 12/04/2012 09:46 AM, Benjamin Root wrote:
> I can't seem to build the v1.2.x branch right now on CentOS6. This has
> not been a problem before. I get the following error message while
> trying to build the freetype2 stuff:
>
> creating build/temp.linux-x86_64-2.7/CXX
> gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall
> -fPIC -DPY_ARRAY_UNIQUE_SYMBOL=MPL_ARRAY_API -DPYCXX_ISO_CPP_LIB=1
> -I/usr/local/include -I/usr/include
> -I/nas/home/broot/centos6/lib/python2.7/site-packages/numpy/core/include
> -I/usr/include/freetype2 -I/usr/local/include -I/usr/include -I.
> -I/home/broot/.local_centos6/include/python2.7 -c src/ft2font.cpp -o
> build/temp.linux-x86_64-2.7/src/ft2font.o
> src/ft2font.cpp: In member function 'Py::Object
> FT2Image::py_as_array(const Py::Tuple&)':
> src/ft2font.cpp:388: error: 'PyArray_UBYTE' was not declared in this scope
> src/ft2font.cpp: In member function 'Py::Object FT2Font::get_path()':
> src/ft2font.cpp:626: error: 'PyArray_DOUBLE' was not declared in this scope
> src/ft2font.cpp:632: error: 'PyArray_UINT8' was not declared in this scope
> error: command 'gcc' failed with exit status 1
>
> _______________________________________________
> Matplotlib-devel mailing list
> Mat...@li...
> https://lists.sourceforge.net/lists/listinfo/matplotlib-devel |
From: Benjamin R. <ben...@ou...> - 2012-12-04 14:47:19
|
I can't seem to build the v1.2.x branch right now on CentOS6. This has not been a problem before. I get the following error message while trying to build the freetype2 stuff:

creating build/temp.linux-x86_64-2.7/CXX
gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall
-fPIC -DPY_ARRAY_UNIQUE_SYMBOL=MPL_ARRAY_API -DPYCXX_ISO_CPP_LIB=1
-I/usr/local/include -I/usr/include
-I/nas/home/broot/centos6/lib/python2.7/site-packages/numpy/core/include
-I/usr/include/freetype2 -I/usr/local/include -I/usr/include -I.
-I/home/broot/.local_centos6/include/python2.7 -c src/ft2font.cpp -o
build/temp.linux-x86_64-2.7/src/ft2font.o
src/ft2font.cpp: In member function ‘Py::Object FT2Image::py_as_array(const Py::Tuple&)’:
src/ft2font.cpp:388: error: ‘PyArray_UBYTE’ was not declared in this scope
src/ft2font.cpp: In member function ‘Py::Object FT2Font::get_path()’:
src/ft2font.cpp:626: error: ‘PyArray_DOUBLE’ was not declared in this scope
src/ft2font.cpp:632: error: ‘PyArray_UINT8’ was not declared in this scope
error: command 'gcc' failed with exit status 1 |
From: Michael D. <md...@st...> - 2012-12-04 13:52:47
|
Also -- this feedback is really helpful when writing some comments in the wrappers as to why certain things are the way they are... I'll make sure to include rationales for the raw-file fast path and the need to open the files on the Python side.

Mike

On 12/04/2012 08:45 AM, Michael Droettboom wrote:
> On 12/03/2012 08:01 PM, Chris Barker - NOAA Federal wrote:
>> On Mon, Dec 3, 2012 at 4:16 PM, Nathaniel Smith <nj...@po...> wrote:
>>> Yeah, this is a general problem with the Python file API, trying to
>>> hook it up to stdio is not at all an easy thing. A better version of
>>> this code would skip that altogether like:
>>>
>>> cdef void write_to_pyfile(png_structp s, png_bytep data, png_size_t count):
>>>     fobj = <object>png_get_io_ptr(s)
>>>     pydata = PyString_FromStringAndSize(data, count)
>>>     fobj.write(pydata)
>>
>> Good point -- not at all Cython-specific, but do you need libpng (or
>> whatever) to write to the file? Can you just get a buffer with the
>> encoded data and write it on the Python side? Particularly if the user
>> wants to pass in an open file object. This might be a better API for
>> folks that might want to stream an image right through a web app, too.
>
> You need to support both: raw C FILE objects for speed, and writing to a
> Python file-like object for flexibility. The code in master already
> does this (albeit with PyCXX), and the code on my "No CXX" branch does
> this as well with Cython.
>
>> As a lot of Python APIs take either a file name or a file-like object,
>> perhaps it would make sense to push that distinction down to the
>> Cython level:
>> -- if it's a filename, open it with raw C
>
> Unfortunately, as stated in detail in my last e-mail, that doesn't work
> with Unicode paths.
>
>> -- if it's a file-like object, have libpng write to a buffer (bytes
>> object), and pass that to the file-like object in Python
>
> libpng does one better and allows us to stream directly to a callback
> which can then write to a Python object. This prevents double
> allocation of memory.
>
>> anyway, not really a Cython issue, but that second option sure would
>> be easy in Cython....
>
> Yeah -- once I figured out how to make a real C callback function from
> Cython, the contents of the callback function itself is pretty easy to
> write.
>
> Mike |
From: Michael D. <md...@st...> - 2012-12-04 13:46:02
|
On 12/03/2012 08:01 PM, Chris Barker - NOAA Federal wrote:
> On Mon, Dec 3, 2012 at 4:16 PM, Nathaniel Smith <nj...@po...> wrote:
>> Yeah, this is a general problem with the Python file API, trying to
>> hook it up to stdio is not at all an easy thing. A better version of
>> this code would skip that altogether like:
>>
>> cdef void write_to_pyfile(png_structp s, png_bytep data, png_size_t count):
>>     fobj = <object>png_get_io_ptr(s)
>>     pydata = PyString_FromStringAndSize(data, count)
>>     fobj.write(pydata)
>
> Good point -- not at all Cython-specific, but do you need libpng (or
> whatever) to write to the file? Can you just get a buffer with the
> encoded data and write it on the Python side? Particularly if the user
> wants to pass in an open file object. This might be a better API for
> folks that might want to stream an image right through a web app, too.

You need to support both: raw C FILE objects for speed, and writing to a Python file-like object for flexibility. The code in master already does this (albeit with PyCXX), and the code on my "No CXX" branch does this as well with Cython.

> As a lot of Python APIs take either a file name or a file-like object,
> perhaps it would make sense to push that distinction down to the
> Cython level:
> -- if it's a filename, open it with raw C

Unfortunately, as stated in detail in my last e-mail, that doesn't work with Unicode paths.

> -- if it's a file-like object, have libpng write to a buffer (bytes
> object), and pass that to the file-like object in Python

libpng does one better and allows us to stream directly to a callback which can then write to a Python object. This prevents double allocation of memory.

> anyway, not really a Cython issue, but that second option sure would
> be easy in Cython....

Yeah -- once I figured out how to make a real C callback function from Cython, the contents of the callback function itself is pretty easy to write.

Mike |
From: Michael D. <md...@st...> - 2012-12-04 13:43:58
|
On 12/03/2012 07:16 PM, Nathaniel Smith wrote:
> On Mon, Dec 3, 2012 at 11:50 PM, Chris Barker - NOAA Federal
> <chr...@no...> wrote:
>> On Mon, Dec 3, 2012 at 2:21 PM, Nathaniel Smith <nj...@po...> wrote:
>>> For the file handle, I would just write
>>>
>>> cdef FILE *fp = fdopen(file_obj.fileno(), "w")
>>>
>>> and be done with it. This will work with any version of Python etc.
>>
>> yeah, that makes sense -- though what if you want to be able to
>> read_to/write_from a file that is already open, and in the middle of
>> the file somewhere -- would that work?
>>
>> I just posted a question to the Cython list, and indeed, it looks like
>> there is no easy answer to the file issue.
>
> Yeah, this is a general problem with the Python file API, trying to
> hook it up to stdio is not at all an easy thing. A better version of
> this code would skip that altogether like:
>
> cdef void write_to_pyfile(png_structp s, png_bytep data, png_size_t count):
>     fobj = <object>png_get_io_ptr(s)
>     pydata = PyString_FromStringAndSize(data, count)
>     fobj.write(pydata)
>
> cdef void flush_pyfile(png_structp s):
>     # Not sure if this is even needed
>     fobj = <object>png_get_io_ptr(s)
>     fobj.flush()
>
> # in write_png:
> write_png_c(<png_byte*>pix_buffer, width, height,
>             NULL, <void*>file_obj, write_to_pyfile, flush_pyfile, dpi)

This is what my original version already does in the event that the file_obj is not a "real" file. In practice, you need to support both methods -- the callback approach is many times slower than writing directly to a regular old FILE object, because there is overhead both at the libpng and Python level, and there's no way to select a good buffer size.

> But this is a separate issue :-) (and needs further fiddling to make
> exception handling work).
>
> Or if you're only going to work on real OS-level file objects anyway,
> you might as well just accept a filename as a string and fopen() it
> locally. Having Python do the fopen just makes your life harder for no
> reason.

There's actually a very good reason. It is difficult to deal with Unicode in file paths from C in a portable way. On Windows, for example, if the user's name contains non-ASCII characters, you can't write to the home directory using fopen, etc. It's doable with some care by using platform-specific C APIs etc., but CPython has already done all of the hard work for us, so it's easiest just to leverage that by opening the file from Python.

Mike |
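Mike's point about Unicode paths can be seen from pure Python, where open() already handles the platform's filesystem encoding; the file name below is just an illustrative example, not one from the thread.

```python
import os
import tempfile

# A path containing a non-ASCII character, like a Windows home directory
# for a user whose name isn't pure ASCII.
d = tempfile.mkdtemp()
path = os.path.join(d, "niño.png")

# CPython's open() deals with the platform-specific path encoding;
# a naive C fopen(path) on Windows would instead need wide-char APIs
# such as _wfopen to handle this reliably.
with open(path, "wb") as f:
    f.write(b"\x89PNG\r\n\x1a\n")  # the PNG signature bytes

assert os.path.exists(path)
```

This is the rationale for opening the file on the Python side and handing the already-open handle down to C.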
From: Michael D. <md...@st...> - 2012-12-04 13:37:32
|
On 12/03/2012 07:00 PM, Chris Barker - NOAA Federal wrote:
> On Mon, Dec 3, 2012 at 12:24 PM, Chris Barker - NOAA Federal
> <chr...@no...> wrote:
>>>>> but some of that complexity could be reduced by using Numpy arrays in place
>>> It would at least make this a more fair comparison to have the Cython
>>> code as Cythonic as possible. However, I couldn't find any ways around
>>> using these particular APIs -- other than the Numpy stuff which probably
>>> does have a more elegant solution in the form of Cython arrays and
>>> memory views.
>
> OK -- so I poked at it, and this is my (very untested) version of
> write_png (I left out the py3 stuff, though it does look like it may
> be required for file handling).
>
> Letting Cython unpack the numpy array is the real win. Maybe having it
> this simple won't work for MPL, but this is what my code tends to look
> like.
>
> def write_png(cnp.ndarray[cnp.uint32, ndim=2, mode="c"] buff not None,
>               file_obj,
>               double dpi=0.0):
>
>     cdef png_uint_32 width = buff.shape[0]
>     cdef png_uint_32 height = buff.shape[1]
>     cdef FILE *fp
>
>     if PyFile_CheckExact(file_obj):
>         fp = PyFile_AsFile(file_obj)
>         write_png_c(&buff[0, 0], width, height, fp,
>                     NULL, NULL, NULL, dpi)
>         return
>     else:
>         raise TypeError("write_png only works with a real PyFileObject")
>
> NOTE: that could be:
>
> cnp.ndarray[cnp.uint8, ndim=3, mode="c"]
>
> I'm not sure how MPL stores image buffers.
>
> or you could accept any object, then call:
>
> np.view()

The buffer comes in both ways, so the latter solution seems like the thing to do. Thanks for working this through. This sort of thing is very helpful. We can also, of course, maintain the existing code that allows writing to an arbitrary file-like object, but this fast path (where it is a "real" file) is very important. It's significantly faster than calling methods on Python objects.

Mike |
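The two buffer layouts discussed here (a 2-D uint32 buffer vs. a 3-D uint8 RGBA buffer) describe the same bytes, and NumPy can convert between them without copying via a view; a small sketch, not matplotlib's actual buffer handling:

```python
import numpy as np

h, w = 2, 3
rgba = np.zeros((h, w, 4), dtype=np.uint8)  # 3-D uint8 RGBA image buffer
rgba[0, 0] = (255, 0, 0, 255)               # one opaque red pixel

# Reinterpret the same memory as packed 32-bit pixels: no copy involved.
packed = rgba.view(np.uint32).reshape(h, w)

assert packed.shape == (2, 3)
# Bytes 255,0,0,255 read back as 0xFF0000FF on either byte order
# (the byte sequence is a palindrome).
assert int(packed[0, 0]) == 0xFF0000FF
```

This is the kind of zero-copy reinterpretation that lets a wrapper accept either layout and normalize it before handing a single pointer to C.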
From: Eric F. <ef...@ha...> - 2012-12-04 03:41:52
|
On 2012/12/03 4:54 AM, Michael Droettboom wrote:
> I think Cython is well suited to writing new algorithmic code to speed
> up hot spots in Python code. I don't think it's as well suited as glue
> between C and Python -- that was not a main goal of the original Pyrex
> project, IIRC. It feels kind of tacked on and not a very good fit to
> the problem.

Not entirely relevant to the PyCXX discussion, but to avoid misleading others reading this discussion, I must strongly disagree with your assertion about Cython's usefulness for wrapping C libraries or small chunks of C. I think this has always been a primary function of Cython and Pyrex, as far back as I have been aware of them. I wrote the raw interface to our contouring code, and I have written Cython interfaces to various chunks of C outside of mpl; Cython makes it much easier for a non-professional programmer such as myself.

So I am not arguing that Cython should be the choice for removing PyCXX, but for non-wizards, it can work very well as glue. It is much more approachable than any alternative of which I am aware. For Fortran, of course, f2py plays this glue-code generation role.

Eric |
From: Chris B. - N. F. <chr...@no...> - 2012-12-04 01:02:28
|
On Mon, Dec 3, 2012 at 4:16 PM, Nathaniel Smith <nj...@po...> wrote:
> Yeah, this is a general problem with the Python file API, trying to
> hook it up to stdio is not at all an easy thing. A better version of
> this code would skip that altogether like:
>
> cdef void write_to_pyfile(png_structp s, png_bytep data, png_size_t count):
>     fobj = <object>png_get_io_ptr(s)
>     pydata = PyString_FromStringAndSize(data, count)
>     fobj.write(pydata)

Good point -- not at all Cython-specific, but do you need libpng (or whatever) to write to the file? Can you just get a buffer with the encoded data and write it on the Python side? Particularly if the user wants to pass in an open file object. This might be a better API for folks that might want to stream an image right through a web app, too.

As a lot of Python APIs take either a file name or a file-like object, perhaps it would make sense to push that distinction down to the Cython level:
-- if it's a filename, open it with raw C
-- if it's a file-like object, have libpng write to a buffer (bytes object), and pass that to the file-like object in Python

anyway, not really a Cython issue, but that second option sure would be easy in Cython....

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959 voice
7600 Sand Point Way NE   (206) 526-6329 fax
Seattle, WA 98115        (206) 526-6317 main reception

Chr...@no... |
From: Nathaniel S. <nj...@po...> - 2012-12-04 00:16:39
|
On Mon, Dec 3, 2012 at 11:50 PM, Chris Barker - NOAA Federal
<chr...@no...> wrote:
> On Mon, Dec 3, 2012 at 2:21 PM, Nathaniel Smith <nj...@po...> wrote:
>> For the file handle, I would just write
>>
>> cdef FILE *fp = fdopen(file_obj.fileno(), "w")
>>
>> and be done with it. This will work with any version of Python etc.
>
> yeah, that makes sense -- though what if you want to be able to
> read_to/write_from a file that is already open, and in the middle of
> the file somewhere -- would that work?
>
> I just posted a question to the Cython list, and indeed, it looks like
> there is no easy answer to the file issue.

Yeah, this is a general problem with the Python file API; trying to hook it up to stdio is not at all an easy thing. A better version of this code would skip that altogether, like:

cdef void write_to_pyfile(png_structp s, png_bytep data, png_size_t count):
    fobj = <object>png_get_io_ptr(s)
    pydata = PyString_FromStringAndSize(data, count)
    fobj.write(pydata)

cdef void flush_pyfile(png_structp s):
    # Not sure if this is even needed
    fobj = <object>png_get_io_ptr(s)
    fobj.flush()

# in write_png:
write_png_c(<png_byte*>pix_buffer, width, height,
            NULL, <void*>file_obj, write_to_pyfile, flush_pyfile, dpi)

But this is a separate issue :-) (and needs further fiddling to make exception handling work).

Or if you're only going to work on real OS-level file objects anyway, you might as well just accept a filename as a string and fopen() it locally. Having Python do the fopen just makes your life harder for no reason.

-n |
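The callback pattern sketched above, where the encoder pushes each encoded chunk to a writer instead of owning a FILE*, can be mimicked in pure Python; zlib stands in for libpng here, so this is an analogue of the design, not matplotlib code.

```python
import io
import zlib

def encode_to_writer(pixels, write):
    # Stand-in for libpng's write-callback mode: the encoder never sees
    # the file itself, it just hands every encoded chunk to `write`.
    comp = zlib.compressobj()
    write(comp.compress(pixels))
    write(comp.flush())

buf = io.BytesIO()  # any object with a .write() method works
encode_to_writer(b"\x00" * 64, buf.write)
assert zlib.decompress(buf.getvalue()) == b"\x00" * 64
```

Because the sink only needs a .write() method, the same encoder can target a real file, an in-memory buffer, or a web response stream, which is exactly the flexibility being argued for.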
From: Chris B. - N. F. <chr...@no...> - 2012-12-04 00:01:34
|
On Mon, Dec 3, 2012 at 12:24 PM, Chris Barker - NOAA Federal
<chr...@no...> wrote:
>>>> but some of that complexity could be reduced by using Numpy arrays in place
>> It would at least make this a more fair comparison to have the Cython
>> code as Cythonic as possible. However, I couldn't find any ways around
>> using these particular APIs -- other than the Numpy stuff which probably
>> does have a more elegant solution in the form of Cython arrays and
>> memory views.

OK -- so I poked at it, and this is my (very untested) version of write_png (I left out the py3 stuff, though it does look like it may be required for file handling).

Letting Cython unpack the numpy array is the real win. Maybe having it this simple won't work for MPL, but this is what my code tends to look like.

def write_png(cnp.ndarray[cnp.uint32, ndim=2, mode="c"] buff not None,
              file_obj,
              double dpi=0.0):

    cdef png_uint_32 width = buff.shape[0]
    cdef png_uint_32 height = buff.shape[1]
    cdef FILE *fp

    if PyFile_CheckExact(file_obj):
        fp = PyFile_AsFile(file_obj)
        write_png_c(&buff[0, 0], width, height, fp,
                    NULL, NULL, NULL, dpi)
        return
    else:
        raise TypeError("write_png only works with a real PyFileObject")

NOTE: that could be:

cnp.ndarray[cnp.uint8, ndim=3, mode="c"]

I'm not sure how MPL stores image buffers.

or you could accept any object, then call:

np.view()

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959 voice
7600 Sand Point Way NE   (206) 526-6329 fax
Seattle, WA 98115        (206) 526-6317 main reception

Chr...@no... |
From: Chris B. - N. F. <chr...@no...> - 2012-12-03 23:51:10
|
On Mon, Dec 3, 2012 at 2:21 PM, Nathaniel Smith <nj...@po...> wrote:
> For the file handle, I would just write
>
> cdef FILE *fp = fdopen(file_obj.fileno(), "w")
>
> and be done with it. This will work with any version of Python etc.

yeah, that makes sense -- though what if you want to be able to read_to/write_from a file that is already open, and in the middle of the file somewhere -- would that work?

I just posted a question to the Cython list, and indeed, it looks like there is no easy answer to the file issue.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959 voice
7600 Sand Point Way NE   (206) 526-6329 fax
Seattle, WA 98115        (206) 526-6317 main reception

Chr...@no... |
From: Nathaniel S. <nj...@po...> - 2012-12-03 22:53:53
|
On Mon, Dec 3, 2012 at 8:24 PM, Chris Barker - NOAA Federal <chr...@no...> wrote: > On Mon, Dec 3, 2012 at 11:59 AM, Michael Droettboom <md...@st...> wrote: >> so there >> are types in libpng, for example, that we don't actually know the size >> of. They are different on different platforms. In C, you just include >> the header. In Cython, I'd have to determine the size of the types in a >> pre-compilation step, or manually determine their sizes and hard code >> them for the platforms we care about. > > yeah -- this is a tricky problem, however, I think you can follow what > you'd do in C -- i.e. presumable the header define their own data > types: png_short or whatever. The actually definition is filled in by > the pre-processor. So I wonder if you can declare those types in > Cython, then have it write C code that uses those types, and it all > gets cleared up at compile time -- maybe. The key is that when you > declare stuff in Cython, that declaration is used to determine how to > write the C code, I don't think the declarations themselves are > translated. Yeah, this isn't an issue in Cython, it's a totally standard thing (though perhaps not well documented). When you write cdef extern from "png.h": ctypedef int png_short or whatever, what you are saying is "the C compiler knows about a type called png_short, which acts in an int-like fashion, so Cython, please use your int rules when dealing with it". So this means that Cython will know that if you return a png_short from a python function, it should insert a call to PyInt_FromLong (or maybe PyInt_FromSsize_t? -- cython worries about these things so I don't have to). But Cython only takes care of the Python<->C interface. It will leave the C compiler to actually allocate the appropriate memory for png_shorts, perform C arithmetic, coerce a png_short into a 'long' when necessary, etc. 
It's kind of mind-bending to wrap your head around, and it definitely does help to spend some time reading the C code that Cython spits out to understand how the mapping works (it's both more and less magic than it looks -- Python stuff gets carefully expanded, C stuff goes through almost verbatim), but the end result works amazingly well. >> It would at least make this a more fair comparison to have the Cython >> code as Cythonic as possible. However, I couldn't find any ways around >> using these particular APIs -- other than the Numpy stuff which probably >> does have a more elegant solution in the form of Cython arrays and >> memory views. > > yup -- that's what I noticed right away -- I'm not sure if there is > easier handling of file handles. For the file handle, I would just write cdef FILE *fp = fdopen(file_obj.fileno(), "w") and be done with it. This will work with any version of Python etc. -n |
From: Chris B. - N. F. <chr...@no...> - 2012-12-03 20:25:51
|
On Mon, Dec 3, 2012 at 11:59 AM, Michael Droettboom <md...@st...> wrote: >>> but some of that complexity could be reduced by using Numpy arrays in place of the >>> image buffer types that each of them contain >> OR Cython arrays and/or memoryviews -- this is indeed a real strength of Cython. > > Sure, but when we return to Python, they should be Numpy arrays which > have more methods etc. -- or am I missing something? Cython makes it really easy to switch between ndarrays and memoryviews, etc -- it's a question of what you want to work with in your code, so you can write a function that takes numpy arrays and returns numpy arrays, but uses a memoryview internally (and passes to C code that way). But I'm not an expert on this, I've found that I'm either doing simple stuff where using numpy arrays directly works fine, or passing the pointer to the data array off to C:

def a_function_to_call_C( cnp.ndarray[double, ndim=2, mode="c"] in_array ):
    """ calls the_c_function, altering the array in-place """
    cdef int m, n
    m = in_array.shape[0]
    n = in_array.shape[1]
    the_c_function( &in_array[0, 0], m, n )

>> It does support the C99 fixed-width integer types: >> from libc.stdint cimport int16_t, int32_t >> > The problem is that Cython can't actually read the C header, yeah, this is a pity. There has been some work on auto-generating Cython from C headers, though nothing mature. For my work, I've been considering writing some simple pxd-generating code, just to make sure my data types are in line with the C++ as it may change. > so there > are types in libpng, for example, that we don't actually know the size > of. They are different on different platforms. In C, you just include > the header. In Cython, I'd have to determine the size of the types in a > pre-compilation step, or manually determine their sizes and hard code > them for the platforms we care about. yeah -- this is a tricky problem, however, I think you can follow what you'd do in C -- i.e. 
presumably the headers define their own data types: png_short or whatever. The actual definition is filled in by the pre-processor. So I wonder if you can declare those types in Cython, then have it write C code that uses those types, and it all gets cleared up at compile time -- maybe. The key is that when you declare stuff in Cython, that declaration is used to determine how to write the C code, I don't think the declarations themselves are translated. > It would at least make this a more fair comparison to have the Cython > code as Cythonic as possible. However, I couldn't find any ways around > using these particular APIs -- other than the Numpy stuff which probably > does have a more elegant solution in the form of Cython arrays and > memory views. yup -- that's what I noticed right away -- I'm not sure if there is easier handling of file handles. > True. We do have two categories of stuff using PyCXX in matplotlib: > things that (primarily) wrap third-party C/C++ libraries, and things > that are actually doing algorithmic heavy lifting. It's quite possible > we don't want the same solution for all. And I'm not sure the wrappers all need to be written the same way, either. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chr...@no... |
From: Michael D. <md...@st...> - 2012-12-03 19:59:16
|
On 12/03/2012 01:12 PM, Chris Barker - NOAA Federal wrote: > This argues against making the Cython source code a part of the matplotlib codebase. > > huh? are you suggesting that we use Cython to generate the glue, then > hand-maintain that glue? I think that is a really, really bad idea -- > generated code is ugly and hard to maintain, it is not designed to be > human-readable, and we wouldn't get the advantages of bug-fixes and > further development in Cython. > > So -- if you use Cython, you want to keep using it, and that means the > Cython source IS the source. I agree that it's a good idea to ship the > generated code as well, so that no one that is not touching the Cython > has to regenerate it. Other than the slight mess from generated files > showing up in diffs, etc, this really works just fine. I agree with this approach. > > Any reason MPL couldn't continue with EXACTLY the same approach now > used with PyCXX -- it generates code as well, yes? No -- PyCXX is just C++. Its killer feature is that it provides a fairly thin layer around the Python C/API that does implicit reference counting through the use of C++ constructors and destructors. I actually think it's a really elegant approach to the problem. The downside we're running into is that it's barely maintained, so using vanilla upstream as provided by packagers is not viable. An alternative to all of this discussion is to fork PyCXX and release as needed. The maintenance required is primarily when new versions of Python are released, so it wouldn't necessarily be a huge undertaking. However, I know some are reluctant to use a relatively unused tool. > > Michael Droettboom wrote: > >> For the PNG extension specifically, it was creating callbacks that can >> be called from C and the setjmp magic that libpng requires. I think >> it's possible to do it, but I was surprised at how non-obvious those >> pieces of Cython were. 
I was really hoping by creating this experiment >> that a Cython expert would step up and show the way ;) > Did you not get the support you expected from the cython list? Anyway, > there's no reason you can't keep stuff in C that's easier in C (or did > PyCXX make this easy?). The support has been adequate, but the solutions aren't always an improvement over raw Python/C API (not just in terms of lines of code but in terms of the number of layers of abstraction and "magic" between the coder and what actually happens). > I think making basic callbacks is actually > pretty straightforward, but I don't know about the setjmp magic (I > have no idea what that means!). It turned out to be not terrible once I figured out the correct incantation. > >> The Agg backend has more C++-specific challenges, particularly >> instantiating very complex template expressions -- > I'm guessing you'd do the complex template stuff in C++ -- and let > Cython see a more traditional static API. Agreed -- I'm really only considering replacing the glue code provided by PyCXX, not the whole thing. matplotlib's C/C++ code has been around for a while and has been fairly vetted at this point, so I don't think a wholesale rewrite makes sense. > >> but some of that complexity could be reduced by using Numpy arrays in place of the >> image buffer types that each of them contain > OR Cython arrays and/or memoryviews -- this is indeed a real strength of Cython. Sure, but when we return to Python, they should be Numpy arrays which have more methods etc. -- or am I missing something? >> The Cython version isn't that much shorter than the C++ version. > I think some things make sense to keep in C++, though I do see a fair > bit of calls (in the C++) to the python API -- I'm surprised there > isn't much code advantage, but anyway, the goal is more robust/easier > to maintain, which may correlate with code-size, but not completely. 
> >> These declarations aren't exact matches to what one would find in the header file(s) > because Cython doesn't support exact-width data types etc. > It does support the C99 fixed-width integer types: > > from libc.stdint cimport int16_t, int32_t > > Or are you talking about something else? The problem is that Cython can't actually read the C header, so there are types in libpng, for example, that we don't actually know the size of. They are different on different platforms. In C, you just include the header. In Cython, I'd have to determine the size of the types in a pre-compilation step, or manually determine their sizes and hard code them for the platforms we care about. > >> I'm not sure why some of the Python/C API calls I needed were not defined in Cython's include wrappers. > I suspect that's an oversight -- for the most part, stuff has been > added as it's needed. > > One other note -- from a quick glance at your Cython code, it looks > like you did almost everything in Cython-that-will-compile-to-pure-C > -- i.e. a lot of calls to the CPython API. But the whole point of > Cython is that it makes those calls for you. So you can do type > checking, and switching on types, and calling np.asarray(), etc, etc, > etc, in Python, without calling the CPython api yourself. I know > nothing of the PNG API, and am pretty weak on the CPython API (and C > for that matter), but it's likely that the Cython code you've > written could be much simplified. It would at least make this a more fair comparison to have the Cython code as Cythonic as possible. However, I couldn't find any ways around using these particular APIs -- other than the Numpy stuff which probably does have a more elegant solution in the form of Cython arrays and memory views. > > >> Once things compiled, due to my own mistake, calling the function segfaulted. Debugging >> that segfault in gdb required, again, wading through the generated code. Using gdb on >> hand-written code is *much* nicer. 
> for sure -- there is a plug-in/add-on/something for using gdb on > Cython code -- I haven't used it but I imagine it would help. Ah. I wasn't aware of that. Thanks for pointing that out. I have the CPython plug-in for gdb and it's great. > > Ian Thomas wrote: >> I have never used Cython, but to me the code looks like an inelegant combination of >> Python, C/C++ and some Cython-specific stuff. > well, yes, it is that! > >> I can see the advantage of this approach for small sections of code, but I have strong > reservations about using it for complicated modules that have extensive use of >> templated code and/or Standard Template Library collections (mpl has examples of >> both of these). > So far, I've found that Cython is good for: > - The simple stuff -- basic loops through numpy arrays, etc. > - wrapping/calling more complex C or C++ > -- essentially handling the reference counting and python type > packing/unpacking of python types. > > So we find we do write some shim code in C++ to make the access to the > core libraries Cython-friendly. We haven't dealt with complex > templating, etc, but I'd guess if we did I'd keep that in C++. And > since the resulting actual glue code is pretty simple, it makes the > debugging easier. > >> Maybe rather than asking "if we switched to using Cython, would more participate", I >> should be asking "among those that can participate in removing the PyCXX >> dependency, what is the preferred approach?" > I don't know that we need a one-size-fits-all approach -- perhaps > some bits make the most sense to move to plain old C/C++, and some to > Cython, either because of the nature of the code itself, or because of > the experience/preference of the person that takes ownership of a > particular problem. > True. We do have two categories of stuff using PyCXX in matplotlib: things that (primarily) wrap third-party C/C++ libraries, and things that are actually doing algorithmic heavy lifting. 
It's quite possible we don't want the same solution for all. Cheers, Mike |
From: Chris B. - N. F. <chr...@no...> - 2012-12-03 18:13:09
|
On Sat, Dec 1, 2012 at 6:44 AM, Michiel de Hoon > > Since the Python/C glue code is modified only very rarely, there may not be a need for regenerating the Python/C glue code by developers or users from a Cython source code. True. > In addition, it is much easier to maintain the Python/C glue code than to write it from scratch. Once you have the Python/C glue code, it's relatively straightforward to modify it by looking at the existing Python/C glue code. > not so true -- getting reference counting right, etc is difficult -- I suppose once the glue code is robust, and all you are changing is a bit of API to the C, maybe.... > > This argues against making the Cython source code a part of the matplotlib codebase. > huh? are you suggesting that we use Cython to generate the glue, then hand-maintain that glue? I think that is a really, really bad idea -- generated code is ugly and hard to maintain, it is not designed to be human-readable, and we wouldn't get the advantages of bug-fixes and further development in Cython. So -- if you use Cython, you want to keep using it, and that means the Cython source IS the source. I agree that it's a good idea to ship the generated code as well, so that no one that is not touching the Cython has to regenerate it. Other than the slight mess from generated files showing up in diffs, etc, this really works just fine. Any reason MPL couldn't continue with EXACTLY the same approach now used with PyCXX -- it generates code as well, yes? Michael Droettboom wrote: > For the PNG extension specifically, it was creating callbacks that can > be called from C and the setjmp magic that libpng requires. I think > it's possible to do it, but I was surprised at how non-obvious those > pieces of Cython were. I was really hoping by creating this experiment > that a Cython expert would step up and show the way ;) Did you not get the support you expected from the cython list? 
Anyway, there's no reason you can't keep stuff in C that's easier in C (or did PyCXX make this easy?). I think making basic callbacks is actually pretty straightforward, but I don't know about the setjmp magic (I have no idea what that means!). > The Agg backend has more C++-specific challenges, particularly > instantiating very complex template expressions -- I'm guessing you'd do the complex template stuff in C++ -- and let Cython see a more traditional static API. > but some of that complexity could be reduced by using Numpy arrays in place of the > image buffer types that each of them contain OR Cython arrays and/or memoryviews -- this is indeed a real strength of Cython. > The Cython version isn't that much shorter than the C++ version. I think some things make sense to keep in C++, though I do see a fair bit of calls (in the C++) to the python API -- I'm surprised there isn't much code advantage, but anyway, the goal is more robust/easier to maintain, which may correlate with code-size, but not completely. > These declarations aren't exact matches to what one would find in the header file(s) > because Cython doesn't support exact-width data types etc. It does support the C99 fixed-width integer types: from libc.stdint cimport int16_t, int32_t Or are you talking about something else? > I'm not sure why some of the Python/C API calls I needed were not defined in Cython's include wrappers. I suspect that's an oversight -- for the most part, stuff has been added as it's needed. One other note -- from a quick glance at your Cython code, it looks like you did almost everything in Cython-that-will-compile-to-pure-C -- i.e. a lot of calls to the CPython API. But the whole point of Cython is that it makes those calls for you. So you can do type checking, and switching on types, and calling np.asarray(), etc, etc, etc, in Python, without calling the CPython api yourself. 
I know nothing of the PNG API, and am pretty weak on the CPython API (and C for that matter), but it's likely that the Cython code you've written could be much simplified. > Once things compiled, due to my own mistake, calling the function segfaulted. Debugging > that segfault in gdb required, again, wading through the generated code. Using gdb on > hand-written code is *much* nicer. for sure -- there is a plug-in/add-on/something for using gdb on Cython code -- I haven't used it but I imagine it would help. Ian Thomas wrote: > I have never used Cython, but to me the code looks like an inelegant combination of > Python, C/C++ and some Cython-specific stuff. well, yes, it is that! > I can see the advantage of this approach for small sections of code, but I have strong > reservations about using it for complicated modules that have extensive use of > templated code and/or Standard Template Library collections (mpl has examples of > both of these). So far, I've found that Cython is good for: - The simple stuff -- basic loops through numpy arrays, etc. - wrapping/calling more complex C or C++ -- essentially handling the reference counting and python type packing/unpacking of python types. So we find we do write some shim code in C++ to make the access to the core libraries Cython-friendly. We haven't dealt with complex templating, etc, but I'd guess if we did I'd keep that in C++. And since the resulting actual glue code is pretty simple, it makes the debugging easier. > Maybe rather than asking "if we switched to using Cython, would more participate", I > should be asking "among those that can participate in removing the PyCXX > dependency, what is the preferred approach?" I don't know that we need a one-size-fits-all approach -- perhaps some bits make the most sense to move to plain old C/C++, and some to Cython, either because of the nature of the code itself, or because of the experience/preference of the person that takes ownership of a particular problem. 
-Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chr...@no... |
From: Michael D. <md...@st...> - 2012-12-03 14:54:53
|
On 12/03/2012 04:07 AM, Ian Thomas wrote: > I vote for using the raw Python/C API. I've written a couple of PyCXX > extensions and whilst it is mostly convenient, PyCXX doesn't support > the use of numpy arrays so for them you have to use the Python/C API. > This means dealing with the reference counting yourself for numpy > arrays; extending this to do the reference counting for all python > objects is not onerous. Dealing with object lifetimes is > bread-and-butter work for C/C++ developers. That matches my experience quite well. > I have never used Cython, but to me the code looks like an inelegant > combination of Python, C/C++ and some Cython-specific stuff. I can > see the advantage of this approach for small sections of code, but I > have strong reservations about using it for complicated modules that > have extensive use of templated code and/or Standard Template Library > collections (mpl has examples of both of these). Even for C libraries like libpng, which requires use of C function callbacks for some things, Cython is more convoluted, particularly when things go wrong and require debugging. (Running gdb over generated Cython code is not fun!) And in my view, writing code like that requires a pretty deep understanding of the Python/C API, C itself, and the rather complex transformations that Cython performs. Writing directly to the Python/C API only requires knowledge of the first two. And there's a large body of books/tutorials/debuggers/tools for C that don't really have equivalents for Cython. > > I agree that Cython opens us up to a larger body of contributors, but > I don't think that this is necessarily a good thing. I think this > really means opens us up to a larger body of Python/Cython > contributors, and is a view expressed from the Python side of the > fence and has the wrong emphasis. 
I am primarily a C++ developer in a > sea of Python developers, and rather than encourage other Python > contributors to dip their toes into C/C++ via Cython I think we should > be encouraging C/C++ contributors to do what they do best. We only > need a few C/C++ developers if we allow them to use their skills in > their preferred way, and they are used to interfacing to legacy APIs > and dealing with object lifetimes. I think Cython is well suited to writing new algorithmic code to speed up hot spots in Python code. I don't think it's as well suited as glue between C and Python -- that was not a main goal of the original Pyrex project, IIRC. It feels kind of tacked on and not a very good fit to the problem. Most of the work to remove PyCXX use in matplotlib is either wrapping third-party libraries (where Cython doesn't really shine), or wrapping C/C++ code in our own tree that's already well-tested and vetted, and I wouldn't propose rewriting that in Cython. I'm only really considering rewriting the Python-to-C interface layer. > > OK, cards on the table. If we wanted to switch all of our PyCXX > modules to use the raw Python/C API, I would happily take on some of > the burden for making the changes and ongoing maintenance of such > modules. Particularly if, in return, I get some help with my > sometimes substandard Python! If we go down the Cython route I > couldn't make this offer; would our many Cython advocates take on the > responsibility of changing and maintaining my C++ code in this scenario? That's a good way to look at this. I was definitely hoping that moving to Cython might open us up to more developers, but at the end of the day, the chosen tool should be the one preferred by those doing the work. Maybe rather than asking "if we switched to using Cython, would more participate", I should be asking "among those that can participate in removing the PyCXX dependency, what is the preferred approach?" Cheers, Mike |
From: Ian T. <ian...@gm...> - 2012-12-03 09:07:13
|
I vote for using the raw Python/C API. I've written a couple of PyCXX extensions and whilst it is mostly convenient, PyCXX doesn't support the use of numpy arrays so for them you have to use the Python/C API. This means dealing with the reference counting yourself for numpy arrays; extending this to do the reference counting for all python objects is not onerous. Dealing with object lifetimes is bread-and-butter work for C/C++ developers. I have never used Cython, but to me the code looks like an inelegant combination of Python, C/C++ and some Cython-specific stuff. I can see the advantage of this approach for small sections of code, but I have strong reservations about using it for complicated modules that have extensive use of templated code and/or Standard Template Library collections (mpl has examples of both of these). I agree that Cython opens us up to a larger body of contributors, but I don't think that this is necessarily a good thing. I think this really means opening us up to a larger body of Python/Cython contributors, and is a view expressed from the Python side of the fence and has the wrong emphasis. I am primarily a C++ developer in a sea of Python developers, and rather than encourage other Python contributors to dip their toes into C/C++ via Cython I think we should be encouraging C/C++ contributors to do what they do best. We only need a few C/C++ developers if we allow them to use their skills in their preferred way, and they are used to interfacing to legacy APIs and dealing with object lifetimes. OK, cards on the table. If we wanted to switch all of our PyCXX modules to use the raw Python/C API, I would happily take on some of the burden for making the changes and ongoing maintenance of such modules. Particularly if, in return, I get some help with my sometimes substandard Python! 
If we go down the Cython route I couldn't make this offer; would our many Cython advocates take on the responsibility of changing and maintaining my C++ code in this scenario? Ian Thomas |
From: Damon M. <dam...@gm...> - 2012-12-03 02:42:16
|
On Sun, Dec 2, 2012 at 8:06 PM, Michael Droettboom <md...@st...> wrote: > I've pushed a fix to v1.2.x and master for this new problem > (35ee2184111fb8f80027869d8ee309c7f4e5a467). Unfortunately, another rebase > of your branches is in order in order to get this fix. Still failing: https://travis-ci.org/matplotlib/matplotlib/jobs/3469141 > > Mike > > > On 12/02/2012 12:23 PM, Thomas Kluyver wrote: > > On 2 December 2012 17:02, Damon McDougall <dam...@gm...> wrote: >> >> > Still failing even with the workaround. Here's proof: >> > https://github.com/matplotlib/matplotlib/pull/1549 >> >> And looks like Thomas reported an issue too: >> https://github.com/matplotlib/matplotlib/issues/1548 > > > This is a different problem, though (unless it's a really bizarre symptom of > the other problem). Now it's an error in compiling matplotlib. > > Thomas > > -- Damon McDougall http://www.damon-is-a-geek.com Institute for Computational Engineering Sciences 201 E. 24th St. Stop C0200 The University of Texas at Austin Austin, TX 78712-1229 |