dev-cpp-users Mailing List for Dev-C++
Open Source C & C++ IDE for Windows
Brought to you by:
claplace
From: Per W. <pw...@ia...> - 2009-08-26 18:48:30
A #define expansion will not affect the execution time. Use of #define is handled by the compiler at compile time. For the executable, it doesn't matter whether you use a #define or write the expansion directly into the source.

It is almost impossible to discuss execution timings for different code expansions. They will depend on the compiler, the compiler options, the processor, the memory latency and bandwidth, and how you combine the code with other code. If you want to, you can benchmark different constructs, but the applicability of such benchmarking will depend on how close your system is to the end users', and remember that a tight loop will be way faster than the same code run at random times, because of differences in cache behavior.

/pwm

On 25 Aug 2009, anoop sabir wrote:
> Will the use of #define macros reduce the execution time? If yes, how?
>
> From where can I have a comparative understanding of various code structure and function execution timings?
From: anoop s. <ano...@re...> - 2009-08-25 07:30:54
Will the use of #define macros reduce the execution time? If yes, how?

From where can I have a comparative understanding of various code structure and function execution timings?
From: Per W. <pw...@ia...> - 2009-08-17 10:19:42
No real problems with using volatile variables to synchronize multiple threads, as long as the variable is small enough that it can be atomically read/written. If you have specific questions relating to the articles you have found, you should post a link to the articles.

Volatile variables work well for polling operations. But there are other methods available to synchronize threads, where a thread may sleep until you send it an event. And there are signals, critical sections and mutexes that can be used too. There is often more than one way to skin a cat, and the developer has to select which method works best for each individual case.

/pwm

On Mon, 17 Aug 2009, Philip Bennefall wrote:
> Hi folks,
>
> I've been using volatile for flag variables to signal that events have occurred between threads, always making sure that only one thread writes to it at a time while another thread regularly checks its value. Like this:
>
> volatile int global_flag=0;
>
> void func_1()
> {
>     // Do work.
>     global_flag=1;
> }
>
> void func2() // In another thread.
> {
>     while(global_flag==0)
>     {
>         // Do other stuff if needed.
>     }
>     // Flag changed, now we can proceed.
> }
>
> This has been working fine on my x86 machine running Windows XP for months without a glitch, but I've recently read up on volatile and found some articles claiming that this is a poor method to accomplish said goal. All I am trying to achieve is a flag variable that is sure not to be read in the middle of a writing operation, but I'm not certain now that I'm going about it the right way. I use MinGW's heaviest optimization setting. Can anyone shed some light on this perhaps? Would I really need to use a synchronization object like a critical section, or is there in fact a simpler method?
>
> Thanks in advance.
>
> Regards
> Philip Bennefall
From: Philip B. <ph...@bl...> - 2009-08-16 23:23:53
Hi folks,

I've been using volatile for flag variables to signal that events have occurred between threads, always making sure that only one thread writes to it at a time while another thread regularly checks its value. Like this:

volatile int global_flag=0;

void func_1()
{
    // Do work.
    global_flag=1;
}

void func2() // In another thread.
{
    while(global_flag==0)
    {
        // Do other stuff if needed.
    }
    // Flag changed, now we can proceed.
}

This has been working fine on my x86 machine running Windows XP for months without a glitch, but I've recently read up on volatile and found some articles claiming that this is a poor method to accomplish said goal. All I am trying to achieve is a flag variable that is sure not to be read in the middle of a writing operation, but I'm not certain now that I'm going about it the right way. I use MinGW's heaviest optimization setting. Can anyone shed some light on this perhaps? Would I really need to use a synchronization object like a critical section, or is there in fact a simpler method?

Thanks in advance.

Regards
Philip Bennefall
From: Julian C. O. <jul...@sy...> - 2009-08-06 20:35:26
Hi! I recently installed Dev-C++, but when I try to use it this message appears: http://img12.imageshack.us/img12/8379/sanstitresgn.jpg. What does it mean? What do I have to do to correct it? I am sorry if it is something I should know, but I am not an expert; I just want to learn to make some simple programs. Thank you!
From: Frederico M. <the...@ho...> - 2009-08-01 17:11:25
Hello,

Thank you for all your replies. First off, please forgive me if any concept is erroneous or I misspelled anything. I am a native Portuguese speaker and I do not have an in-depth knowledge of English — I have only gotten to high-school level, so please bear with me :)...

PWM, your reply was exactly within the concepts I was envisioning. I completely agree with your first paragraph: it is, or should be, more efficient to request memory and be able to reuse it than to free it back to the kernel. Your point about having an approximation of the memory needed beforehand is also covered: the object I was trying to define in the first mail should keep a permanent record (a file) of memory usage, so that on the next program execution the object can reserve enough memory to serve at least 80% (per the 80-20 rule) of total memory requirements.

Your paragraph on the instantaneous allocation of memory in chunks by the OS, committed only when written to, raises a question: does HeapAlloc with HEAP_ZERO_MEMORY obey this rule? The memory is zeroed, so it should suffer the same penalties, right?

The paragraph discussing the implementation of an internal memory manager is exactly my idea. The only difference from those custom heap allocators, as I know them, is that mine will implement some sort of profiling; by managing the memory it will keep a record, to inform me of its usage, and possibly help me trace down bottlenecks such as thread concurrency on multi-processor systems. But this is getting ahead of itself.

The main problem, for which I have no solution yet, is obviously to figure out when I could release allocated memory, as you expressed so succinctly. This led me down another, much more tiresome, development path: to implement these features per class. Each class will have its own manager, which requests memory from a central memory-manager "dispatcher" of sorts, keeping the same features as above. This should solve the situation, because each object will raise an event upon releasing memory, so the central manager will know that the memory is available for reabsorption. It will waste a few CPU cycles running the same code again and again... It is a more robust and elegant solution — giving me semantic information about the data present at the released location — but it may prove less efficient in CPU cycles compared to the impact of reserving memory through kernel services, as I believe the Windows kernel will always allocate more virtual memory than it is asked for. Although I have no data to prove my point, and Microsoft has not replied to my request to see the source code (a little humor — I have not really asked...).

Could you point me to some libraries for doing such analysis on sample code? I'll try to create the above-mentioned algorithms in a simplified manner and subject them to a few groups of tests simulating real activity. Hopefully I'll draw conclusions that will help me decide, and possibly refine these models to a point where their superiority is obvious.

Thank you in advance for the time spent on this subject,
Frederico Marques

> Date: Sat, 1 Aug 2009 14:14:56 +0200
> From: pw...@ia...
> To: the...@ho...
> CC: dev...@li...
> Subject: Re: [Dev-C++] [Win32] Theoretical question about memory managing
>
> It is more efficient to have a local cache and reuse released memory than
> to send it back to Windows and directly reclaim it again.
>
> It is normally not good to preallocate all memory in advance unless the
> program is known to always consume almost the same amount of memory. A big
> preallocation will just punish other applications, while potentially
> slowing down the startup of your program while Windows swaps out data to
> be able to fulfill your allocations.
>
> Another thing is that the OS can often instantly allocate large chunks of
> memory without actually committing any RAM - your application just gets
> an address range, but no mapping of memory happens until your program
> tries to access the individual memory pages. On first access of each page,
> one memory page gets allocated into the address range. That means that to
> actually preallocate data, you also need to write to the memory - for
> example by clearing it to zero. And allocating and clearing a very large
> block of memory can make your application seemingly hang for quite some
> time, while Windows finds something to throw out to be able to find unused
> RAM to map into your application space.
>
> Many applications that need large numbers of allocations/releases do
> incremental allocations, grouping the allocated objects into different
> block sizes. When a block is released, it is sent to a list of blocks of
> that size. When needing a block, the program checks if this list has any
> available block. If not, the program may allocate an array of 10 or 100
> blocks using a single Windows allocation, then split the large block into
> the 10 or 100 smaller blocks and add them to the free list.
>
> For just tracing the memory blocks your program uses, there are several
> free libraries available. They can, on exit of the application, tell you
> of any memory leaks. But such libraries don't work well with an
> application that allocates large blocks and then splits them into smaller
> blocks. The library will then just see the large blocks, and consider them
> all to be memory leaks. The reason is that if you split one large block
> into many small ones, it is very hard for your program to later figure out
> whether all the small blocks have been released, so that you may unlink
> them from the free list and release the big block.
>
> In the end, it is very hard to talk about a general solution that is
> applicable in all cases. What is best will depend on the exact access
> patterns of the program: the quotient between small and large allocations;
> the total number of allocations; the amount of allocations in relation to
> releases; whether allocations and releases are randomly distributed during
> the runtime, or the allocation pattern looks like waves where the program
> makes a lot of allocations and then a lot of releases before starting the
> next wave.
>
> /pwm
>
> On Fri, 31 Jul 2009, Frederico Marques wrote:
>
> > Hello,
> >
> > Again, I seek your advice on a personal project:
> > I have searched for information related to the impact that managing, or more accurately not managing, memory will have on processes in Win32.
> > Recently I stumbled upon a paper discussing the effects of the paging process on Windows, and the cost of requesting more resources than those currently addressed by the front-end virtual memory manager.
> > As I envision it, the final code will use a great deal of memory, and it will not always be coded by me; as such, I cannot be certain that the allocated resources will ever be freed.
> > I am quite the control freak, so I'm trying to implement one of two designs:
> >
> > a) The software will contain one to several objects that allocate memory at program start, based on the latest data retrieved from executions with high-volume processing; in other words, I'll subject my code to stress tests and register memory usage, in order to reserve it on the next program start. I'll obviously be managing the memory, providing through one defined interface my own allocation/freeing routines that manage the memory already reserved (from the OS).
> > b) All objects within the software will somehow (I haven't yet figured this out) manage their own memory, this way having some information about the memory besides its start address and its span.
> >
> > To sum up, and finally presenting my question: which design would be better, and which would provide more information about the memory usage?
> > The above points are obviously based on my belief that the system (OS, L2/L1 caches, processor, RAM) will handle paging virtual memory in and out better than actually reserving it at a later state, especially at different points in the program logic, since locality is not observed in, for instance, linked-list implementations, and processor cycles will be 'wasted' finding available memory in the kernel/C-runtime area.
> >
> > Please do correct me if my assumptions are not correct, and please advise me if you have measurable results related to such implementations... This is a pet project, and although I am pursuing it for sheer fun, I believe it can be put to good use.
> >
> > Sincerely
> > Frederico Marques
From: Per W. <pw...@ia...> - 2009-08-01 12:40:03
It is more efficient to have a local cache and reuse released memory than to send it back to Windows and directly reclaim it again.

It is normally not good to preallocate all memory in advance unless the program is known to always consume almost the same amount of memory. A big preallocation will just punish other applications, while potentially slowing down the startup of your program while Windows swaps out data to be able to fulfill your allocations.

Another thing is that the OS can often instantly allocate large chunks of memory without actually committing any RAM - your application just gets an address range, but no mapping of memory happens until your program tries to access the individual memory pages. On first access of each page, one memory page gets allocated into the address range. That means that to actually preallocate data, you also need to write to the memory - for example by clearing it to zero. And allocating and clearing a very large block of memory can make your application seemingly hang for quite some time, while Windows finds something to throw out to be able to find unused RAM to map into your application space.

Many applications that need large numbers of allocations/releases do incremental allocations, grouping the allocated objects into different block sizes. When a block is released, it is sent to a list of blocks of that size. When needing a block, the program checks if this list has any available block. If not, the program may allocate an array of 10 or 100 blocks using a single Windows allocation, then split the large block into the 10 or 100 smaller blocks and add them to the free list.

For just tracing the memory blocks your program uses, there are several free libraries available. They can, on exit of the application, tell you of any memory leaks. But such libraries don't work well with an application that allocates large blocks and then splits them into smaller blocks. The library will then just see the large blocks, and consider them all to be memory leaks. The reason is that if you split one large block into many small ones, it is very hard for your program to later figure out whether all the small blocks have been released, so that you may unlink them from the free list and release the big block.

In the end, it is very hard to talk about a general solution that is applicable in all cases. What is best will depend on the exact access patterns of the program: the quotient between small and large allocations; the total number of allocations; the amount of allocations in relation to releases; whether allocations and releases are randomly distributed during the runtime, or the allocation pattern looks like waves where the program makes a lot of allocations and then a lot of releases before starting the next wave.

/pwm

On Fri, 31 Jul 2009, Frederico Marques wrote:
> Hello,
>
> Again, I seek your advice on a personal project:
> I have searched for information related to the impact that managing, or more accurately not managing, memory will have on processes in Win32.
> Recently I stumbled upon a paper discussing the effects of the paging process on Windows, and the cost of requesting more resources than those currently addressed by the front-end virtual memory manager.
> As I envision it, the final code will use a great deal of memory, and it will not always be coded by me; as such, I cannot be certain that the allocated resources will ever be freed.
> I am quite the control freak, so I'm trying to implement one of two designs:
>
> a) The software will contain one to several objects that allocate memory at program start, based on the latest data retrieved from executions with high-volume processing; in other words, I'll subject my code to stress tests and register memory usage, in order to reserve it on the next program start. I'll obviously be managing the memory, providing through one defined interface my own allocation/freeing routines that manage the memory already reserved (from the OS).
> b) All objects within the software will somehow (I haven't yet figured this out) manage their own memory, this way having some information about the memory besides its start address and its span.
>
> To sum up, and finally presenting my question: which design would be better, and which would provide more information about the memory usage?
> The above points are obviously based on my belief that the system (OS, L2/L1 caches, processor, RAM) will handle paging virtual memory in and out better than actually reserving it at a later state, especially at different points in the program logic, since locality is not observed in, for instance, linked-list implementations, and processor cycles will be 'wasted' finding available memory in the kernel/C-runtime area.
>
> Please do correct me if my assumptions are not correct, and please advise me if you have measurable results related to such implementations... This is a pet project, and although I am pursuing it for sheer fun, I believe it can be put to good use.
>
> Sincerely
> Frederico Marques
From: Merlin V. <mer...@ya...> - 2009-08-01 11:09:07
Greetings everyone,

One of the modules of an application needs to download files from the internet over a rather poor connection. I would like to know if you could give me some help with this — any idea of where to start, or an article.

Regards
merlin