From: Benjamin R. <ben...@ou...> - 2013-08-09 16:14:15
On Fri, Aug 9, 2013 at 11:15 AM, Martin Mokrejs <mmo...@fo...> wrote:

> Hi Ben,
> thank you for your comments. OK, here are revised patches. I see a hot spot
> in artist.py where the getattr() calls are too expensive. Actually, those
> under the callable() path.

Ah, yes... one of the biggest warts in our codebase. Does get_aliases() really get called that often? If so, I doubt that changes to startswith() are the best use of our time. I would imagine that a refactor that caches the possible aliases would serve us better. Hell, I would just as soon see the entire aliases framework ripped out and redone correctly in the first place.

As for the other startswith() changes, there are some subtle differences that have to be considered. First, is there a guarantee that the string being indexed is not empty? startswith() handles that correctly, while indexing would throw an exception (and setting up code to try...except those exceptions would reduce readability and probably bring performance back to where we started).

I think that the *biggest* improvement we are going to get is from your patch to figure.py, because it touches some very deep code that is executed very frequently, and that we were doing in probably the most inefficient manner.

Again, I really stress the importance of setting up a github account and submitting a PR. Once you do that, you are all set to contribute to many other great projects (numpy, scipy, ipython, etc.).

Cheers!
Ben Root
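[Editor's note: a minimal illustration of the subtlety Ben describes, using hypothetical strings rather than matplotlib code. startswith() and slicing both tolerate the empty string; only direct indexing raises.]

```python
# startswith() vs. manual indexing: behaviour on an empty string.
names = ["set_color", "get_color", ""]

# Safe: startswith() simply returns False for the empty string.
safe = [n for n in names if n.startswith("set_")]
assert safe == ["set_color"]

# Slicing is also safe (""[:4] is just ""), and is the usual
# micro-optimisation people reach for instead of startswith():
sliced = [n for n in names if n[:4] == "set_"]
assert sliced == ["set_color"]

# Direct indexing is what blows up on the empty string:
try:
    [n for n in names if n[0] == "s"]
except IndexError:
    print("IndexError on empty string")
```

So replacing `s.startswith(prefix)` with `s[0] == ...` is only valid where the surrounding code guarantees a non-empty string; the slicing form is the safe drop-in.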
From: Martin M. <mmo...@fo...> - 2013-08-09 15:16:30
Hi Ben,

thank you for your comments. OK, here are revised patches. I see a hot spot in artist.py where the getattr() calls are too expensive. Actually, those under the callable() path:

    793 class ArtistInspector:
    ...
    817     def get_aliases(self):
    818         """
    819         Get a dict mapping *fullname* -> *alias* for each *alias* in
    820         the :class:`~matplotlib.artist.ArtistInspector`.
    821
    822         Eg., for lines::
    823
    824             {'markerfacecolor': 'mfc',
    825              'linewidth'      : 'lw',
    826             }
    827
    828         """
    829         names = [name for name in dir(self.o) if
    830                  (name[:4] in ['set_', 'get_'])
    831                  and callable(getattr(self.o, name))]
    832         aliases = {}
    833         for name in names:
    834             func = getattr(self.o, name)
    835             if not self.is_alias(func):
    836                 continue
    837             docstring = func.__doc__
    838             fullname = docstring[10:]
    839             aliases.setdefault(fullname[4:], {})[name[4:]] = None
    840         return aliases

Another hot spot is setp() in artist.py; again it is really get_aliases() on line 817, which leads to getattr() and callable(). The problem is that there are millions of these calls. So, once again, all my patches are attached (figure.py.patch and artist.py.patch have changed).

Martin

Benjamin Root wrote:
> On Fri, Aug 9, 2013 at 9:04 AM, Martin Mokrejs <mmo...@fo...> wrote:
>
> > Hi Phil,
> >
> > Phil Elson wrote:
> > > Hi Martin,
> > >
> > > Thanks for this - we are really interested in speeding up the scatter
> > > and barchart plotting with large data sets. In fact, we've done some
> > > work (https://github.com/matplotlib/matplotlib/pull/2156) recently to
> > > make the situation better.
> > >
> > > I'd really like to review these changes (against matplotlib master),
> > > and the best possible solution to doing this is if you were to submit
> > > a pull request. If the changes you have made are logically separable,
> > > then I'd encourage you to make a few PRs, but otherwise, a single PR
> > > with all of these changes would be great.
> >
> > I went through the changes there and they just cope with other pieces
> > of matplotlib. My changes are general python improvements moving away
> > from str.startswith() and using generators instead of for loops. Just
> > apply the patches yourself and see. ;)
> >
> > > Would you mind turning these patches into PR(s)?
> > > (https://github.com/matplotlib/matplotlib/compare/)
> >
> > Um, I don't know what to do on that page, sorry. I don't see how to
> > upload my patch file or patched file to be compared with master. :(
> >
> > > Thanks!
> >
> > I am sorry but I just don't have time to fiddle with github. It is just
> > awkward. I even failed to download diffs of the changes from
> > https://github.com/matplotlib/matplotlib/pull/2156/commits.
> >
> > I'd rather continue studying runsnake output. ;-)
> >
> > Martin
>
> A snippet from one of your patches:
>
>     dsu = []
>     -        for a in self.patches:
>     -            dsu.append( (a.get_zorder(), a, a.draw, [renderer]))
>     +        [ dsu.append( (x.get_zorder(), x, x.draw, [renderer])) for x in self.patches ]
>
> Yes, we certainly should use list comprehensions here, but you are using
> one incorrectly. It should be:
>
>     dsu = [(x.get_zorder(), x, x.draw, [renderer]) for x in self.patches]
>
> And then, further down, do the following:
>
>     dsu.extend((x.get_zorder(), x, x.draw, [renderer]) for x in self.lines)
>
> Note the generator form of the comprehension as opposed to the
> list-comprehension form. List comprehensions should *always* be assigned
> to something; they should only be used to replace the for-append idiom in
> python.
>
> Thank you, though, for pointing out parts of the code that can benefit
> from revisions. I certainly hope you can get this put together as a pull
> request on github so we can work to make this patch better!
>
> Ben Root
>
> P.S. - I <3 runsnakerun!
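[Editor's note: the caching refactor Ben suggests could be sketched roughly like this. The `_alias_cache` dict and the simplified inspector below are illustrative, not matplotlib's actual API: since the aliases depend only on the class of the inspected object, the expensive dir()/getattr() scan needs to run only once per class.]

```python
# Illustrative sketch of per-class caching for an expensive
# introspection routine such as ArtistInspector.get_aliases().
_alias_cache = {}  # hypothetical module-level cache: class -> aliases dict

class ArtistInspector:
    def __init__(self, o):
        self.o = o

    def get_aliases(self):
        klass = type(self.o)
        if klass not in _alias_cache:          # scan each class only once
            _alias_cache[klass] = self._scan_aliases()
        return _alias_cache[klass]

    def _scan_aliases(self):
        # stand-in for the real dir()/getattr()/docstring scan
        aliases = {}
        for name in dir(self.o):
            if name[:4] in ("set_", "get_") and callable(getattr(self.o, name)):
                aliases.setdefault(name[4:], {})
        return aliases

class Line:  # toy artist class for demonstration
    def set_linewidth(self, w): pass
    def get_linewidth(self): return 1

# The second call is a dictionary lookup, not a full introspection pass.
a1 = ArtistInspector(Line()).get_aliases()
a2 = ArtistInspector(Line()).get_aliases()
assert a1 is a2
assert "linewidth" in a1
```

The trade-off is that the cache assumes aliases never change at runtime for a given class, which seems to hold for the getters/setters being scanned here.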
From: Benjamin R. <ben...@ou...> - 2013-08-09 13:46:10
On Fri, Aug 9, 2013 at 9:04 AM, Martin Mokrejs <mmo...@fo...> wrote:

> Hi Phil,
>
> Phil Elson wrote:
> > Hi Martin,
> >
> > Thanks for this - we are really interested in speeding up the scatter
> > and barchart plotting with large data sets. In fact, we've done some
> > work (https://github.com/matplotlib/matplotlib/pull/2156) recently to
> > make the situation better.
> >
> > I'd really like to review these changes (against matplotlib master),
> > and the best possible solution to doing this is if you were to submit
> > a pull request. If the changes you have made are logically separable,
> > then I'd encourage you to make a few PRs, but otherwise, a single PR
> > with all of these changes would be great.
>
> I went through the changes there and they just cope with other pieces of
> matplotlib. My changes are general python improvements moving away from
> str.startswith() and using generators instead of for loops. Just apply
> the patches yourself and see. ;)
>
> > Would you mind turning these patches into PR(s)?
> > (https://github.com/matplotlib/matplotlib/compare/)
>
> Um, I don't know what to do on that page, sorry. I don't see how to
> upload my patch file or patched file to be compared with master. :(
>
> > Thanks!
>
> I am sorry but I just don't have time to fiddle with github. It is just
> awkward. I even failed to download diffs of the changes from
> https://github.com/matplotlib/matplotlib/pull/2156/commits.
>
> I'd rather continue studying runsnake output. ;-)
>
> Martin

A snippet from one of your patches:

    dsu = []
    -        for a in self.patches:
    -            dsu.append( (a.get_zorder(), a, a.draw, [renderer]))
    +        [ dsu.append( (x.get_zorder(), x, x.draw, [renderer])) for x in self.patches ]

Yes, we certainly should use list comprehensions here, but you are using one incorrectly. It should be:

    dsu = [(x.get_zorder(), x, x.draw, [renderer]) for x in self.patches]

And then, further down, do the following:

    dsu.extend((x.get_zorder(), x, x.draw, [renderer]) for x in self.lines)

Note the generator form of the comprehension as opposed to the list-comprehension form. List comprehensions should *always* be assigned to something; they should only be used to replace the for-append idiom in python.

Thank you, though, for pointing out parts of the code that can benefit from revisions. I certainly hope you can get this put together as a pull request on github so we can work to make this patch better!

Ben Root

P.S. - I <3 runsnakerun!
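[Editor's note: Ben's correction, written out runnably. `renderer` and the `Artist` class below are toy stand-ins; the real code builds draw tuples for matplotlib artists.]

```python
# Replacing the for-append idiom with a list comprehension, then
# extending with a generator expression (Ben's corrected version).
class Artist:
    def __init__(self, z):
        self._z = z
    def get_zorder(self):
        return self._z
    def draw(self, renderer):
        pass

renderer = object()            # stand-in for the real renderer
patches = [Artist(2), Artist(1)]
lines = [Artist(3)]

# List comprehension replaces: dsu = []; for a in patches: dsu.append(...)
dsu = [(a.get_zorder(), a, a.draw, [renderer]) for a in patches]

# Generator expression inside extend() -- no throwaway list is built,
# unlike the [dsu.append(...) for ...] anti-pattern in the patch.
dsu.extend((a.get_zorder(), a, a.draw, [renderer]) for a in lines)

assert [z for z, *_ in dsu] == [2, 1, 3]
```

The anti-pattern in the original patch builds and discards a list of `None` values purely for its side effects; the comprehension and generator forms above express the same loop without that waste.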
From: Benjamin R. <ben...@ou...> - 2013-08-09 13:35:54
On Fri, Aug 9, 2013 at 6:57 AM, SquirrelSeq <ral...@un...> wrote:

> Hello everybody,
>
> I created 3D plots with 3D bars in matplotlib. The bars are colored
> according to a colormap.
>
> Unfortunately, only the vertical faces have the desired bright colors,
> whereas the top sides of the bars are shaded darker to make them look
> more 3D.
>
> This makes the colors a lot more difficult to see, depending on the
> perspective.
>
> What can I do in order to switch off shading or to add an ambient light
> source?
>
> Best regards
> SquirrelSeq
>
> <http://matplotlib.1069221.n5.nabble.com/file/n41766/overlap_heatmap.png>

One way to do it is to patch the source code in the following way: in mpl_toolkits/mplot3d/axes3d.py, at around line 2355, replace the line

    sfacecolors = self._shade_colors(facecolors, normals)

with

    sfacecolors = facecolors

Could you file a github issue requesting a keyword argument to turn this on/off?

Cheers!
Ben Root
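[Editor's note: Ben's suggested source edit can also be applied at runtime without touching the installed files, using the usual monkey-patch pattern. The sketch below demonstrates the pattern on a stub class, not matplotlib itself; for matplotlib you would rebind `Axes3D._shade_colors` the same way, assuming that private method still exists with this signature in your version.]

```python
# Monkey-patching a shading method to a pass-through, shown on a stub class.
class Axes3DStub:
    def _shade_colors(self, colors, normals):
        # pretend shading: darken every face color
        return [c * 0.5 for c in colors]

    def bar3d(self, colors):
        # the drawing path calls the shading hook internally
        return self._shade_colors(colors, normals=None)

ax = Axes3DStub()
assert ax.bar3d([1.0]) == [0.5]          # shading applied

# Rebind the method on the class so face colors pass through unshaded.
Axes3DStub._shade_colors = lambda self, colors, normals: colors
assert ax.bar3d([1.0]) == [1.0]          # shading disabled
```

Since the rebinding happens on the class, it affects all existing and future instances, which is exactly why a proper keyword argument (as Ben requests) is the better long-term fix.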
From: Martin M. <mmo...@fo...> - 2013-08-09 13:05:29
Hi Phil,

Phil Elson wrote:

> Hi Martin,
>
> Thanks for this - we are really interested in speeding up the scatter and
> barchart plotting with large data sets. In fact, we've done some work
> (https://github.com/matplotlib/matplotlib/pull/2156) recently to make the
> situation better.
>
> I'd really like to review these changes (against matplotlib master), and
> the best possible solution to doing this is if you were to submit a pull
> request. If the changes you have made are logically separable, then I'd
> encourage you to make a few PRs, but otherwise, a single PR with all of
> these changes would be great.

I went through the changes there and they just cope with other pieces of matplotlib. My changes are general python improvements moving away from str.startswith() and using generators instead of for loops. Just apply the patches yourself and see. ;)

> Would you mind turning these patches into PR(s)?
> (https://github.com/matplotlib/matplotlib/compare/)

Um, I don't know what to do on that page, sorry. I don't see how to upload my patch file or patched file to be compared with master. :(

> Thanks!

I am sorry but I just don't have time to fiddle with github. It is just awkward. I even failed to download diffs of the changes from https://github.com/matplotlib/matplotlib/pull/2156/commits.

I'd rather continue studying runsnake output. ;-)

Martin

> Phil
>
> On 9 August 2013 12:53, Martin Mokrejs <mmo...@fo...> wrote:
>
> > Hi,
> > I am drawing some barcharts and a scatter plot, and the speed of
> > rendering is awful once you have more than 100 000 dots. I ran the
> > python profiler, which led me to .startswith() calls and some for loops
> > which append to a list repeatedly. These parts could still be sped up,
> > I think, but a first attempt is here:
> >
> > UNPATCHED 1.2.1
> >
> > real 23m17.764s
> > user 13m25.880s
> > sys  3m37.180s
> >
> > PATCHED:
> >
> > real 6m59.831s
> > user 5m18.000s
> > sys  1m40.360s
> >
> > The patches are simple, and because I see list expansions elsewhere in
> > the code, I do not see any problems with backwards compatibility (no
> > new python language features are required).
> >
> > Hope this helps,
> > Martin
From: Phil E. <pel...@gm...> - 2013-08-09 12:25:07
Hi Martin,

Thanks for this - we are really interested in speeding up the scatter and barchart plotting with large data sets. In fact, we've done some work (https://github.com/matplotlib/matplotlib/pull/2156) recently to make the situation better.

I'd really like to review these changes (against matplotlib master), and the best possible solution to doing this is if you were to submit a pull request. If the changes you have made are logically separable, then I'd encourage you to make a few PRs, but otherwise, a single PR with all of these changes would be great.

Would you mind turning these patches into PR(s)? (https://github.com/matplotlib/matplotlib/compare/)

Thanks!

Phil

On 9 August 2013 12:53, Martin Mokrejs <mmo...@fo...> wrote:

> Hi,
> I am drawing some barcharts and a scatter plot, and the speed of rendering
> is awful once you have more than 100 000 dots. I ran the python profiler,
> which led me to .startswith() calls and some for loops which append to a
> list repeatedly. These parts could still be sped up, I think, but a first
> attempt is here:
>
> UNPATCHED 1.2.1
>
> real 23m17.764s
> user 13m25.880s
> sys  3m37.180s
>
> PATCHED:
>
> real 6m59.831s
> user 5m18.000s
> sys  1m40.360s
>
> The patches are simple, and because I see list expansions elsewhere in the
> code, I do not see any problems with backwards compatibility (no new
> python language features are required).
>
> Hope this helps,
> Martin
>
> _______________________________________________
> Matplotlib-users mailing list
> Mat...@li...
> https://lists.sourceforge.net/lists/listinfo/matplotlib-users
From: Martin M. <mmo...@fo...> - 2013-08-09 12:12:54
Hi,

I am drawing some barcharts and a scatter plot, and the speed of rendering is awful once you have more than 100 000 dots. I ran the python profiler, which led me to .startswith() calls and some for loops which append to a list repeatedly. These parts could still be sped up, I think, but a first attempt is here:

UNPATCHED 1.2.1

real 23m17.764s
user 13m25.880s
sys  3m37.180s

PATCHED:

real 6m59.831s
user 5m18.000s
sys  1m40.360s

The patches are simple, and because I see list expansions elsewhere in the code, I do not see any problems with backwards compatibility (no new python language features are required).

Hope this helps,
Martin
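[Editor's note: the profiling workflow Martin describes can be reproduced with the standard library alone. A minimal sketch; `plot_stand_in` is a hypothetical stand-in for the real matplotlib rendering calls.]

```python
# Finding hotspots with cProfile, then sorting by cumulative time,
# as one would to locate the startswith()/append hotspots Martin reports.
import cProfile
import io
import pstats

def plot_stand_in():
    # stand-in for the expensive scatter/barchart rendering
    s = 0
    for i in range(100_000):
        s += i
    return s

profiler = cProfile.Profile()
profiler.enable()
plot_stand_in()
profiler.disable()

# Print the five most expensive entries by cumulative time.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats(5)
assert "plot_stand_in" in stream.getvalue()
```

Tools such as runsnakerun (mentioned later in the thread) visualize the same cProfile output graphically, which makes deep call trees like matplotlib's much easier to navigate.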
From: Martin M. <mmo...@fo...> - 2013-08-09 12:07:34
[re-sending with the 3rd patch file included as well, sorry]

Hi,

I am drawing some barcharts and a scatter plot, and the speed of rendering is awful once you have more than 100 000 dots. I ran the python profiler, which led me to .startswith() calls and some for loops which append to a list repeatedly. These parts could still be sped up, I think, but a first attempt is here:

UNPATCHED 1.2.1

real 23m17.764s
user 13m25.880s
sys  3m37.180s

PATCHED:

real 6m59.831s
user 5m18.000s
sys  1m40.360s

The patches are simple, and because I see list expansions elsewhere in the code, I do not see any problems with backwards compatibility (no new python language features are required).

Hope this helps,
Martin
From: SquirrelSeq <ral...@un...> - 2013-08-09 10:57:55
Hello everybody,

I created 3D plots with 3D bars in matplotlib. The bars are colored according to a colormap.

Unfortunately, only the vertical faces have the desired bright colors, whereas the top sides of the bars are shaded darker to make them look more 3D.

This makes the colors a lot more difficult to see, depending on the perspective.

What can I do in order to switch off shading or to add an ambient light source?

Best regards
SquirrelSeq

<http://matplotlib.1069221.n5.nabble.com/file/n41766/overlap_heatmap.png>

--
View this message in context: http://matplotlib.1069221.n5.nabble.com/Turn-off-shading-in-ax-bar3d-tp41766.html
Sent from the matplotlib - users mailing list archive at Nabble.com.
From: Fernando P. <fpe...@gm...> - 2013-08-09 01:38:16
Hi all,

I am incredibly thrilled, on behalf of the amazing IPython Dev Team, to announce the official release of IPython 1.0 today, an effort nearly 12 years in the making.

The previous version (0.13) was released on June 30, 2012, and in this development cycle we had:

- ~12 months of work
- ~700 pull requests merged
- ~600 issues closed (non-pull requests)
- contributions from ~150 authors
- ~4000 commits

# A little context

What does "1.0" mean for IPython? Obviously IPython has been a staple of the scientific Python community for years, and we've made every effort to make it a robust and production-ready tool for a long time, so what exactly do we mean by tagging this particular release as 1.0?

Basically, we feel that the core design of IPython, and the scope of the project, is where we want it to be. What we have today is what we consider a reasonably complete, design- and scope-wise, IPython 1.0: an architecture for interactive computing that can drive kernels in a number of ways using a well-defined protocol, and rich and powerful clients that let users control those kernels effectively. Our different clients serve different needs, with the old workhorse of the terminal still being very useful, but much of our current development energy going into the Notebook, obviously. The Notebook enables interactive exploration to become Literate Computing, bridging the gaps from individual work to collaboration and publication, all with an open file format that is a direct record of the underlying communication protocol.

There are obviously plenty of open issues (many of them very important) that need fixing, and large and ambitious new lines of development for the years to come. But the work of the last four years, since the summer of 2009 when Brian Granger was able to devote a summer (thanks to funding from the NiPy project - nipy.org) to refactoring the old IPython core code, finally opened up our infrastructure for real innovation. By disentangling what was a useful but impenetrable codebase, it became possible for us to start building a flexible, modern system for interactive computing that abstracted the old REPL model into a generic protocol that kernels could use to talk to clients. This led at first to the creation of the Qt console, and then to the Notebook and the out-of-process terminal client. It also allowed us to (finally!) unify our parallel computing machinery with the rest of the interactive system, which Min Ragan-Kelley pulled off in a development tour de force that involved rewriting, in a few weeks, a huge and complex Twisted-based system.

We are very happy with how the Notebook work has turned out, and it seems the entire community agrees with us, as the uptake has been phenomenal. Back in the very first "IPython 0.0.1" that I started in 2001 (https://gist.github.com/fperez/1579699) there were already hints of tools like Mathematica: it was my everyday workhorse as a theoretical physicist, and I found its Notebook environment invaluable. But as a grad student trying out "just an afternoon hack" (IPython was my very first Python program as I was learning the language), I didn't have the resources, skills or vision to attempt building an entire notebook system, and to be honest the tools of the day would have made that enterprise a miserable one. But those ideas were always driving our efforts, and as IPython started becoming a project with a team, we made multiple attempts to get a good Notebook built around IPython. Those interested can read an old blog post of mine with the history (http://blog.fperez.org/2012/01/ipython-notebook-historical.html). The short story is that in 2011, on our sixth attempt, Brian was again able to devote a focused summer to using our client-server architecture and, with the stack of the modern web (Javascript, CSS, websockets, Tornado, ...), finally build a robust system for Literate Computing across programming languages.

Today, thanks to the generous support and vision of Josh Greenberg at the Alfred P. Sloan Foundation, we are working very hard on building the notebook infrastructure, and this release contains major advances on that front. We have high hopes for what we'll do next; as a glimpse of the future that this enables, there is now a native Julia kernel that speaks to our clients, notebook included: https://github.com/JuliaLang/IJulia.jl

# Team

I can't stress enough how impressed I am with the work people are doing in IPython, and what a privilege it is to work with colleagues like these. Brian Granger and Min Ragan-Kelley joined IPython around 2005, initially working on the parallel machinery, but since ~2009 they have become the heart of the project. Today Min is our top committer and knows our codebase better than anyone else, and I can't imagine better partners for an effort like this. And from regulars in our core team like Thomas Kluyver, Matthias Bussonnier, Brad Froehle and Paul Ivanov to newcomers like Jonathan Frederic and Zach Sailer, in addition to the many more whose names are in our logs, we have a crazy amount of energy being poured into IPython. I hope we'll continue to harness it productively!

The full list of contributors to this release can be seen here: http://ipython.org/ipython-doc/rel-1.0.0/whatsnew/github-stats-1.0.html

# Release highlights

* nbconvert: this is the major piece of new functionality in this cycle, and was an explicit part of our roadmap (https://github.com/ipython/ipython/wiki/Roadmap:-IPython). nbconvert is now an IPython subcommand to convert notebooks into other formats such as HTML or LaTeX, but more importantly, it's a very flexible system that lets you write custom templates to generate new output with arbitrary control over the formatting and transformations that are applied to the input. We want to stress that despite the fact that a huge amount of work went into nbconvert, this should be considered a *tech preview* release. We've come to realize how complex this problem is, and while we'll make every effort to keep the high-level command-line syntax and APIs as stable as possible, it is quite likely that the internals will continue to evolve, possibly in backwards-incompatible ways. So if you start building services and libraries that make heavy use of the nbconvert internals, please be prepared for some turmoil in the months to come, and ping us on the dev list with questions or concerns.

* Notebook improvements: there has been a ton of polish work in the notebook at many levels, though the file format remains unchanged from 0.13, so you shouldn't have any problems sharing notebooks with colleagues still using 0.13.

  - Autosave: probably the most oft-requested feature, the notebook server now autosaves your files! You can still hit Ctrl-S to force a manual save (which also creates a special 'checkpoint' you can come back to).
  - The notebook supports raw_input(), and thus also %debug. This was probably the main deficiency of the notebook as a client compared to the terminal/qtconsole, and it has finally been fixed.
  - Added %%html, %%svg, %%javascript, and %%latex cell magics for writing raw output in notebook cells.
  - Fixed an issue parsing LaTeX in markdown cells, which required users to type \\\, instead of \\.
  - Images support width and height metadata, and thereby 2x scaling (retina support).
  - %%file has been renamed %%writefile (%%file is deprecated).

* The input transformation code has been updated and rationalized. This is a somewhat specialized part of IPython, but of importance to projects that build upon it for custom environments, like Sympy and Sage.

Our full release notes are here: http://ipython.org/ipython-doc/rel-1.0.0/whatsnew/version1.0.html
and the gory details are here: http://ipython.org/ipython-doc/rel-1.0.0/whatsnew/github-stats-1.0.html

# Installation

Installation links and instructions are at: http://ipython.org/install.html

And IPython is also on PyPI: http://pypi.python.org/pypi/ipython

# Requirements

IPython 1.0 requires Python ≥ 2.6.5 or ≥ 3.2.1. It does not support Python 3.0, 3.1, or 2.5.

# Acknowledgments

Last but not least, we'd like to acknowledge the generous support of those who make it possible for us to spend our time working on IPython. In particular, the Alfred P. Sloan Foundation today lets us have a solid team working full-time on the project, and without the support of Enthought Inc at multiple points in our history, we wouldn't be where we are today. The full list of our support is here: http://ipython.org/index.html#support

Thanks to everyone! Please enjoy IPython 1.0, and report all bugs as usual!

Fernando, on behalf of the IPython Dev Team.

--
Fernando Perez (@fperez_org; http://fperez.org)
fperez.net-at-gmail: mailing lists only (I ignore this when swamped!)
fernando.perez-at-berkeley: contact me here for any direct mail