The filter function identifies the exception type, and based on the type of the exception the handler can treat each exception differently. In the following program, exception handling and termination of a program are illustrated using a filter function.
The program generates an exception based on the type of exception entered by the user. Floating-point exceptions are enabled with the help of the _controlfp() function, and the old floating-point status is saved in fpOld. The __try block covers the different cases of exception generation with the help of a switch statement.
Now we will see how the values in the eCategory reference variable are set. User-generated exceptions are identified by a masking operation on the exception code: masking with the customer-code bit yields zero for system-generated exceptions and a non-zero result for user-generated ones.
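The lesson's program is not reproduced here; the following is a minimal sketch of such a filter function under those assumptions. The Filter() name, the category values, and the eCategory pointer parameter are illustrative, not the actual program.

#include <windows.h>
#include <float.h>

DWORD Filter(LPEXCEPTION_POINTERS pExP, DWORD *eCategory)
{
    DWORD exCode = pExP->ExceptionRecord->ExceptionCode;

    /* The customer bit (0x20000000) is set in user-raised exception codes;
       masking a system-generated code with it yields zero. */
    if ((exCode & 0x20000000) != 0) {
        *eCategory = 10;                      /* user-generated exception */
        return EXCEPTION_EXECUTE_HANDLER;
    }
    switch (exCode) {
    case EXCEPTION_INT_DIVIDE_BY_ZERO:
    case EXCEPTION_FLT_DIVIDE_BY_ZERO:
        *eCategory = 1;                       /* arithmetic exception */
        return EXCEPTION_EXECUTE_HANDLER;
    case EXCEPTION_ACCESS_VIOLATION:
        *eCategory = 2;                       /* memory exception */
        return EXCEPTION_EXECUTE_HANDLER;
    default:
        *eCategory = 0;
        return EXCEPTION_CONTINUE_SEARCH;     /* not handled here */
    }
}

/* Typical use:
       fpOld = _controlfp(0, 0);                  // save the old FP status
       _controlfp(fpOld & ~(EM_ZERODIVIDE | EM_OVERFLOW), MCW_EM);
       __try { ... trigger or raise an exception ... }
       __except (Filter(GetExceptionInformation(), &eCategory)) { ... }    */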
Lesson 59 : Console Control Handlers
Console control handlers are quite similar in mechanism to exception handlers. Normal exceptions respond to events such as division by zero and invalid page faults, but they do not respond to console-related events such as Ctrl+C. Console control handlers can detect and handle console-related events. The SetConsoleCtrlHandler() API is used to add console handlers.
The API takes the address of a HandlerRoutine and an Add Boolean as parameters. There can be a number of handler routines; a routine is added when the Add parameter is TRUE. If the HandlerRoutine parameter is NULL and Add is TRUE, the Ctrl+C signal is ignored by the calling process.
The HandlerRoutine() function takes a single parameter of type DWORD and returns a Boolean value.
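For reference, the documented prototypes of the API and of the handler callback are:

BOOL WINAPI SetConsoleCtrlHandler(
    PHANDLER_ROUTINE HandlerRoutine,   /* address of the handler function, or NULL */
    BOOL Add);                         /* TRUE to add the handler, FALSE to remove it */

BOOL WINAPI HandlerRoutine(
    DWORD dwCtrlType);                 /* e.g., CTRL_C_EVENT, CTRL_CLOSE_EVENT, CTRL_LOGOFF_EVENT */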
The handler routine is invoked when a console event is detected. It runs in an independent thread within the process, so raising an exception within the handler routine does not interfere with the thread that registered it. Signals apply to the whole process, while exceptions apply to a single thread.
Usually signal handlers are used to perform cleanup tasks whenever a shutdown, close, or logoff event is detected. A signal handler returns TRUE if it has taken care of the event; otherwise it returns FALSE, in which case the next handler in the chain is invoked. The handler chain is invoked in the reverse order in which the handlers were set up, and the system signal handler is the last one in the chain.
/* Chapter 4. CNTRLC.C (fragment) */
#include "Everything.h"

static BOOL WINAPI Handler(DWORD cntrlEvent);
static volatile BOOL exitFlag = FALSE;

/* In the main program: register the handler, then loop until exitFlag is set. */
if (!SetConsoleCtrlHandler(Handler, TRUE))
    return 0;

BOOL WINAPI Handler(DWORD cntrlEvent)
{
    switch (cntrlEvent) {
    case CTRL_C_EVENT:
    case CTRL_CLOSE_EVENT:
    default:
        exitFlag = TRUE;     /* every console event requests shutdown */
        return TRUE;
    }
}
The program can be terminated by the user either by closing the console or with Ctrl+C. The handler is registered with Windows using SetConsoleCtrlHandler(Handler, TRUE) and is activated upon the occurrence of any console event. If registration of the handler fails for any reason, an error message is printed.
Exception handling functions can be directly associated with exceptions, just like console handlers. When vectored exception handling is set up, Windows calls the Vectored Exception Handlers (VEH) first, before unwinding the stack and searching the frame-based handlers.
No __try and __except keywords are required with VEH; in this respect they resemble console control handlers. Windows provides a set of APIs for VEH management as follows.
PVOID WINAPI AddVectoredExceptionHandler(ULONG FirstHandler,
PVECTORED_EXCEPTION_HANDLER VectoredHandler);
The given API has two parameters. FirstHandler specifies the order in which the handler executes: a non-zero value indicates that it will be the first one to execute, and zero specifies that it will be the last. If more than one handler is set up with a zero value, they are invoked in the order they were added using AddVectoredExceptionHandler(). The return value is NULL in case of failure; otherwise it is a handle to the vectored exception handler.
The exception handler function should be fast and must return as quickly as possible, so it should not contain much code. A VEH should neither perform any blocking operation, such as Sleep(), nor use any synchronization objects. Typically a VEH accesses the exception information structure, does some minimal processing, and sets a few flags.
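As an illustration, the following is a minimal sketch of such a handler; the handler name and the flag are assumptions, and the handler only records the event and lets the normal SEH search continue:

#include <windows.h>

static volatile LONG divideByZeroSeen = 0;            /* illustrative flag */

LONG CALLBACK MyVectoredHandler(PEXCEPTION_POINTERS pExP)
{
    if (pExP->ExceptionRecord->ExceptionCode == EXCEPTION_INT_DIVIDE_BY_ZERO)
        InterlockedExchange(&divideByZeroSeen, 1);    /* minimal work only */
    return EXCEPTION_CONTINUE_SEARCH;                 /* let SEH handle it */
}

int main(void)
{
    /* Non-zero first argument: call this handler before other VEHs. */
    PVOID hVeh = AddVectoredExceptionHandler(1, MyVectoredHandler);
    /* ... application work ... */
    RemoveVectoredExceptionHandler(hVeh);
    return 0;
}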
Dynamic Memory
The need for dynamic memory arises whenever dynamic data structures such as search tables, trees, and linked lists are used. Windows provides a set of APIs for handling dynamic memory allocation.
Windows also provides memory-mapped files, which allow direct movement of data between user space and files without the use of the file APIs. Memory-mapped files make it convenient to handle dynamic data structures and make file handling faster, because mapped files are treated just like memory. They also provide a mechanism for memory sharing among processes.
Windows essentially provides two API platforms, Win32 and Win64.
The Win32 API uses 32-bit pointers, hence the virtual address space is 2^32 bytes (4 GB), and all data types have been optimized for 32-bit boundaries. Win64 uses a virtual address space of 2^64 bytes (16 exabytes).
A good strategy is to design an application in such a way that it can run in both modes without any change in code.
Win32 makes at least half of the virtual address space (2-3 GB) accessible to a process, and the rest is reserved by the system for shared data, code, drivers, and so on. Overall, Windows provides a large memory space to user programs, which therefore requires careful management.
Further information about the parameters of Windows memory management can be obtained using the GetSystemInfo() API, which fills in a caller-supplied SYSTEM_INFO structure. The structure contains various information about the system, such as the page size, the allocation granularity, and the application's virtual memory address range.
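A short sketch of its use (the output formatting is illustrative):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);                       /* fills the caller's structure */
    printf("Page size:              %lu\n", si.dwPageSize);
    printf("Allocation granularity: %lu\n", si.dwAllocationGranularity);
    printf("Application addresses:  %p - %p\n",
           si.lpMinimumApplicationAddress, si.lpMaximumApplicationAddress);
    return 0;
}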
Lesson 64 : Introduction to Heaps
A programmer allocates memory dynamically from a heap. Windows maintains a pool of heaps, and a process can have many heaps. Traditionally, one heap is considered enough, but several heaps may be required to make a program more efficient.
If a single heap is sufficient, the C run-time library heap functions such as malloc(), free(), calloc(), and realloc() might be enough.
A heap is a Windows object and hence is accessed through a handle. Whenever you need to allocate memory from a heap, you need a heap handle. Every process in Windows has a default heap, which can be accessed through the following API.
HANDLE GetProcessHeap(VOID)
The API returns a handle to the process heap. NULL is returned in case of failure and not
INVALID_HANDLE_VALUE.
However, for a number of reasons it may be desirable to have more than one heap; it is often convenient to have distinct heaps for different data structures.
Separate Heaps
1. If a distinct heap is assigned to each thread, then each thread uses only the memory allocated from its own heap.
2. Fragmentation is reduced when one fixed-size data structure is allocated from a single heap.
3. If a single heap contains a complex data structure, it can easily be de-allocated with a single API call by destroying the entire heap; complex de-allocation algorithms are not needed in such cases.
4. A small heap dedicated to a single data structure reduces the chance of page faults, as per the principle of locality of reference.
Lesson 65 : Creating Heaps
We can create a new heap using the HeapCreate() API, and its size can be set to zero. The API rounds the heap size up to the nearest multiple of the page size. The initial amount of memory is committed to the heap up front, rather than on demand; if memory requirements exceed the initial size, more pages are automatically committed to the heap, up to the maximum size allowed.
If the required memory size is not known, deferring memory commitment in this way is good practice, because committed memory is a limited resource. Following is the syntax of the API used to create new heaps.
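The documented prototype is:

HANDLE HeapCreate(
    DWORD flOptions,        /* e.g., HEAP_GENERATE_EXCEPTIONS, HEAP_NO_SERIALIZE */
    SIZE_T dwInitialSize,
    SIZE_T dwMaximumSize);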
dwMaximumSize, if non-zero, determines the maximum size of the heap as set by the user; the heap is not growable beyond this limit. If it is zero, the heap is growable to the extent of the virtual memory space available to it.
dwInitialSize is the initial (committed) size of the heap set by the programmer. SIZE_T is used to enable portability: depending on the Win32 or Win64 platform, SIZE_T is 32 or 64 bits wide.
BOOL HeapDestroy(
HANDLE hHeap
);
hHeap is the handle to a previously created heap. Do not use a handle obtained from GetProcessHeap(), because doing so may raise an exception. Destroying a heap is an easy way to get rid of all of its contents, including complex data structures, at once.
Once a heap is created, it does not by itself provide memory that is directly available to the program. Rather, it only creates the logical heap structure from which new memory blocks will be allocated. Memory blocks are allocated using the heap memory allocation APIs HeapAlloc() and HeapReAlloc().
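The documented prototype of HeapAlloc() is:

LPVOID HeapAlloc(
    HANDLE hHeap,
    DWORD dwFlags,
    SIZE_T dwBytes);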
hHeap is the handle of the heap from which memory is to be allocated. dwFlags is quite similar to the flags used in HeapCreate().
HEAP_GENERATE_EXCEPTIONS: this flag causes an exception to be raised if the allocation fails. The exceptions are not generated by HeapCreate() itself; rather, they may occur at the time of allocation.
dwBytes is the size of the memory block to be allocated. For a non-growable heap, it is limited to 0x7FFF8 bytes, approximately 0.5 MB.
The return value of the function is an LPVOID: the address of the allocated memory block. Use this pointer in the normal way; there is no need to refer to the heap handle when using the block. If the exception flag is not set, HeapAlloc() returns NULL on failure, and GetLastError() does not report HeapAlloc() failures.
BOOL HeapFree(HANDLE hHeap, DWORD dwFlags, LPVOID lpMem);
hHeap is the handle of the heap from which the block was allocated. dwFlags should be 0 or set to HEAP_NO_SERIALIZE. lpMem should be a pointer previously returned by HeapAlloc() or HeapReAlloc().
A return value of FALSE indicates failure; GetLastError() can be used to get the error.
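The flags and parameters described next belong to HeapReAlloc(), which changes the size of an existing block; its documented prototype is:

LPVOID HeapReAlloc(
    HANDLE hHeap,
    DWORD dwFlags,
    LPVOID lpMem,
    SIZE_T dwBytes);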
HEAP_ZERO_MEMORY: only the newly allocated memory is set to zero (in case dwBytes is greater than the previous allocation).
lpMem specifies the pointer to a block previously allocated from the same heap, hHeap.
dwBytes is the new block size, which can be smaller or larger than the previous allocation. The same restriction as for HeapAlloc() applies, i.e., on a non-growable heap the block size cannot exceed 0x7FFF8 bytes.
Some programs may need to determine the size of allocated blocks in a heap. The size of an allocated block is determined using the HeapSize() API as follows.
SIZE_T HeapSize(
HANDLE hHeap,
DWORD dwFlags,
LPCVOID lpMem
);
The function returns the size of the block or zero in case of failure. The only valid dwFlag is
HEAP_NO_SERIALIZE.
Serialization
Serialization is required when concurrent threads use a common resource. Serialization is not required if threads are autonomous and there is no possibility of concurrent threads disrupting each other, for example when:
a. The heap is used from only a single thread, or
b. Each thread has its own heap that is insulated from other threads.
Heap Exceptions
Heap exceptions are enabled using the HEAP_GENERATE_EXCEPTIONS flag. This allows the program to close open handles before the program terminates. There can be two scenarios with this option.
There are some other functions that can be used while working with heaps. For example, HeapSetInformation() can be used to enable the low-fragmentation heap. It can also be used to enable termination of the process upon heap corruption.
So far we have used the memory management APIs for allocating, reallocating, and deallocating heap blocks; we can also get the size of an allocated block through an API.
A typical methodology for dealing with heaps is to get a heap handle using either HeapCreate() or GetProcessHeap(), and then use that handle to allocate memory blocks from the heap with HeapAlloc(). If some block needs to be deallocated, use HeapFree(). Before the program terminates, or when the heap is no longer required, use HeapDestroy() to dispose of the heap, as sketched below.
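A minimal sketch of this methodology, with illustrative sizes and abbreviated error handling, might look as follows:

#include <windows.h>

int main(void)
{
    /* Create a growable heap with a 64 KB initial commitment. */
    HANDLE hHeap = HeapCreate(0, 0x10000, 0);
    if (hHeap == NULL)
        return 1;

    /* Allocate, resize, and use a block from that heap. */
    int *p = (int *)HeapAlloc(hHeap, HEAP_ZERO_MEMORY, 100 * sizeof(int));
    if (p != NULL)
        p = (int *)HeapReAlloc(hHeap, HEAP_ZERO_MEMORY, p, 200 * sizeof(int));

    if (p != NULL)
        HeapFree(hHeap, 0, p);

    HeapDestroy(hHeap);       /* releases everything allocated from hHeap */
    return 0;
}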
It is best not to mix the Windows heap API with the C run-time library functions: anything allocated with the C library functions should also be deallocated with the C library functions.
The example is formulated using two heaps. The first is a node heap, while the other is a record heap. The node heap will be used to build the search tree, while the record (data) heap will be used to store the keyed records.
Following are the three heaps as shown in the figure:
1. ProcHeap
2. RecHeap
3. NodeHeap
ProcHeap holds the root address, while RecHeap stores the records and NodeHeap stores the nodes as they are created. The tree data structure is maintained in the NodeHeap; sorting is performed over the NodeHeap, which yields the references used to locate records in the RecHeap. At the end, all the heaps are destroyed except ProcHeap, because it was obtained with GetProcessHeap().
/* Sort files using a binary search tree.
   Technique:
   1. Scan the input file, placing each key (which is fixed size)
      into the binary search tree and each record onto the data heap. */
#include "Everything.h"
#define KEY_SIZE 8

/* ... type definitions (LPTNODE, etc.), heap and file handles
       (hNode, hData, hIn), and helper prototypes elided ... */
TCHAR key[KEY_SIZE];
LPTSTR pData;
LPTNODE pRoot;
BOOL noPrint;
CHAR errorMessage[256];

/* In _tmain: open the input file, then build and scan the tree. */
if (hIn == INVALID_HANDLE_VALUE)
    return 2;                 /* could not open the input file */

__try {
    /* Create two growable heaps: one for tree nodes, one for records. */
    hNode = HeapCreate(
        HEAP_GENERATE_EXCEPTIONS | HEAP_NO_SERIALIZE,
        NODE_HEAP_ISIZE, 0);
    hData = HeapCreate(
        HEAP_GENERATE_EXCEPTIONS | HEAP_NO_SERIALIZE,
        DATA_HEAP_ISIZE, 0);

    /* ... read the file, filling the tree rooted at pRoot ... */

    if (!noPrint)
        Scan(pRoot);          /* in-order scan outputs the sorted records */

    /* ... destroy the heaps and close the input file ... */
    hIn = INVALID_HANDLE_VALUE;
}
/* Handle the exceptions that we can expect - namely, file open error or
   out of memory. */
__except ( /* ... exception-code test elided ... */
        ? EXCEPTION_EXECUTE_HANDLER : EXCEPTION_CONTINUE_SEARCH) {
    /* ... report the failure ... */
}
return 0;
/* FillTree: scan the input file, creating a tree node and a data record
   for each line (fragment). */
DWORD nRead, i;
BOOL atCR;
TCHAR dataHold[MAX_DATA_LEN];
LPTSTR pString;

while (TRUE) {
    /* ... allocate pNode from the node heap; read the next record ... */
    pNode->pData = NULL;
    /* ... copy the record text into dataHold ... */
    dataHold[i - 1] = _T('\0');
    /* ... allocate pString from the data heap and fill it in ... */
    pString[KEY_SIZE] = _T('\0');
    pNode->pData = pString;
    /* ... insert pNode into the tree and continue with the next record ... */
}
/* Insert the new node, pNode, into the binary search tree, pRoot. */
if (*ppRoot == NULL) {
    *ppRoot = pNode;
    return TRUE;
}
else
    /* ... compare keys and recurse into the left or right subtree ... */
    return TRUE;
/* Scan: in-order traversal that visits every node of the tree. */
if (pNode == NULL)
    return TRUE;
Scan(pNode->Left);
/* ... output the record referenced by pNode ... */
Scan(pNode->Right);
return TRUE;
Dynamic memory is allocated from the paging file, which is controlled by the operating system's (OS) virtual memory management system; the OS also controls the mapping of virtual addresses onto physical memory. Memory-mapped files allow virtual memory space to be mapped directly onto an ordinary file.
Memory-mapped files offer several advantages:
· There is no need to invoke direct file input/output (I/O) operations.
· Any data structure placed in the file is available for later use as well.
· It is convenient and efficient to use in-memory algorithms for sorting, searching, and so on; large files can be processed as if they were in memory.
· File processing is faster than with ReadFile() and WriteFile().
· There is no need to manage buffers for repeated operations on a file; the OS does this more optimally.
· Multiple processes can share memory by mapping their virtual memory space onto the same file.
· For file mapping, paging-file space is not needed.
Other considerations
Windows also uses memory mapping when implementing Dynamic Link Libraries (DLLs) and when loading and executing executable (EXE) files. It is strongly recommended to use SEH exception handling when dealing with memory-mapped files, in order to catch EXCEPTION_IN_PAGE_ERROR exceptions.
In order to perform memory-mapped file I/O operations, a file mapping object needs to be created. This object uses the file handle of an open file. The open file, or part of the file, is mapped onto the address space of the process. File mapping objects can be assigned names so that they are also available to other processes. Moreover, these mapping objects also require protection and security attributes and a size. The API used for this purpose is CreateFileMapping().
HANDLE CreateFileMapping(
    HANDLE hFile,
    LPSECURITY_ATTRIBUTES lpFileMappingAttributes,
    DWORD flProtect,
    DWORD dwMaximumSizeHigh,
    DWORD dwMaximumSizeLow,
    LPCTSTR lpMapName );
hFile is an open file handle compatible with the protection flag flProtect.
PAGE_READONLY - It means that page can only be read within the mapped region. It can
neither be written nor executed. hFile must have GENERIC_READ access.
PAGE_READWRITE - Provides full access to object if the hFile has GENERIC_READ and
GENERIC_WRITE access.
PAGE_WRITECOPY - Means that when a mapped region changes, a private copy is written to
the paging file and not to the original file.
dwMaximumSizeHigh and dwMaximumSizeLow specify the size of the mapping object. If set to
0, then the current file size is used. Carefully specify this size in the following cases:
· If the file size is expected to grow, then use the expected file size.
· Do not map a region beyond this limit. Once the size is assigned the mapping region
cannot grow.
· The mapping size needs to be specified in the form of two 32-bit values rather than one
64-bit value.
· lpMapName is the name of the map that can also be used by other processes. Set this to
NULL if you do not mean to share the map.
Previously, we discussed that a file mapping can be assigned a shared name by using
CreateFileMapping(). This shared name can be used to open existing file maps using
OpenFileMapping(). A file map created by a certain process can be subsequently used by other
processes by referring to the object by name. The operation may fail if the name does not exist.
HANDLE OpenFileMapping(
    DWORD dwDesiredAccess,
    BOOL bInheritHandle,
    LPCTSTR lpMapName );
Previously, a file mapping was created, or a handle to an already created mapping was obtained. The next step is to map the file into the process address space. In the case of heaps, the heap was created first and then HeapAlloc() was used to allocate space within the heap. Similarly, once the file mapping is created, MapViewOfFile() is used to define a file view block.
LPVOID MapViewOfFile(
    HANDLE hFileMappingObject,
    DWORD dwDesiredAccess,
    DWORD dwFileOffsetHigh,
    DWORD dwFileOffsetLow,
    SIZE_T dwNumberOfBytesToMap );
dwDesiredAccess specifies the access to the file view and can be one of the following:
FILE_MAP_WRITE
FILE_MAP_READ
FILE_MAP_ALL_ACCESS
dwFileOffsetHigh and dwFileOffsetLow give the starting offset within the file from which the mapping starts. To start the mapping from the beginning of the file, set both to zero. The offset must be specified in multiples of 64 KB (the allocation granularity).
dwNumberOfBytesToMap is the number of bytes of the file to map. Set it to zero to map the whole file.
If the function is successful it returns the base address of the mapped region. If the function fails,
the return value is NULL.
Just as it is necessary to release heap blocks with HeapFree(), it is also necessary to unmap file views. File views are unmapped using UnmapViewOfFile().
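Its documented prototype is:

BOOL UnmapViewOfFile(
    LPCVOID lpBaseAddress );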
lpBaseAddress is the pointer to the base address of the mapped view. If the function fails, the
return value is zero.
The file view can be flushed using the FlushViewOfFile() API, which forces the OS to write the dirty pages of the view back to disk. If two processes access a file at the same time, such that one uses file mapping while the other uses ReadFile() and WriteFile(), the two processes may not see the same view: changes made through the file map might still be in memory and not yet visible through ReadFile() or WriteFile() unless they are flushed. To get a uniform view, it is necessary that all the processes use file maps.
Lesson 75 : More about file Mapping
In Win32, it is not possible to map files bigger than 2-3 GB, and the entire 3 GB might not be available for file-mapping space alone. This limitation is removed in Win64. A file mapping cannot be extended, so you need to know the size of a map in advance, and customized functions would be required to allocate memory within the mapped region.
The following minimum steps need to be taken while working with mapped files (a sketch follows the list):
· Open the file with at least GENERIC_READ access.
· If the file is new, set its length to some non-zero value using SetFilePointerEx() followed by SetEndOfFile().
· Create the file mapping with CreateFileMapping() and map a view of it with MapViewOfFile().
· In the end, unmap the file view with UnmapViewOfFile() and use CloseHandle() to close the map and file handles.
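A minimal sketch of these steps for an existing file might look as follows; the file name comes from the command line and error handling is abbreviated:

#include <windows.h>
#include <tchar.h>

int _tmain(int argc, LPTSTR argv[])
{
    if (argc < 2)
        return 1;

    HANDLE hFile = CreateFile(argv[1], GENERIC_READ | GENERIC_WRITE, 0, NULL,
                              OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE)
        return 1;

    /* Size 0 means "use the current file size" for the mapping object. */
    HANDLE hMap = CreateFileMapping(hFile, NULL, PAGE_READWRITE, 0, 0, NULL);
    if (hMap != NULL) {
        LPBYTE pView = (LPBYTE)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 0);
        if (pView != NULL) {
            /* ... process the file contents directly in memory via pView ... */
            UnmapViewOfFile(pView);
        }
        CloseHandle(hMap);
    }
    CloseHandle(hFile);
    return 0;
}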
Accessing a file through file mapping presents visible advantages. Although setting up file views may be programmatically complex, the advantages are far bigger: processing time may be reduced roughly threefold compared with conventional file operations when dealing with sequential files. These advantages may seem to disappear only when the input and output files are very large. The example is a simple Caesar cipher application. It sequentially processes all the characters within the file, simply substituting each character by shifting it a few places in the ASCII set.
/* Chapter 5. Caesar cipher using memory-mapped files (fragment). */
#include "Everything.h"

__try {
    LARGE_INTEGER fileSize;

    /* Open the input file. */
    hIn = CreateFile(fIn, GENERIC_READ, 0, NULL,
            OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hIn == INVALID_HANDLE_VALUE)
        /* ... report the error; the outer handler cleans up ... */ ;

    /* Create a file mapping object on the input file. Use the file size. */
    /* ... GetFileSizeEx(hIn, &fileSize); hInMap = CreateFileMapping(...); ... */
    if (hInMap == NULL)
        /* ... report the error ... */ ;

    /* Map the whole input file. (Mapping only part of the file at a time
       would allow you to deal with very large files on 32-bit systems;
       that is not done here.) */
    /* ... pInFile = MapViewOfFile(hInMap, FILE_MAP_READ, 0, 0, 0); ... */
    if (pInFile == NULL)
        /* ... report the error ... */ ;

    /* The output file MUST have Read/Write access for the mapping to succeed. */
    hOut = CreateFile(fOut, GENERIC_READ | GENERIC_WRITE,
            0, NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hOut == INVALID_HANDLE_VALUE)
        /* ... report the error ... */ ;

    /* ... create hOutMap and map pOutFile the same way ... */
    if (hOutMap == NULL)
        /* ... report the error ... */ ;
    if (pOutFile == NULL)
        /* ... report the error ... */ ;

    /* Now move the input file to the output file, doing all the work in memory. */
    __try {
        pIn = pInFile;
        pOut = pOutFile;
        while (/* ... more characters remain ... */) {
            /* ... *pOut = shifted(*pIn) ... */
            pIn++; pOut++;
        }
        complete = TRUE;
    }
    __except (GetExceptionCode() == EXCEPTION_IN_PAGE_ERROR
            ? EXCEPTION_EXECUTE_HANDLER : EXCEPTION_CONTINUE_SEARCH) {
        complete = FALSE;   /* a mapped page could not be read or written */
    }

    /* ... unmap the views, close all handles ... */
    return complete;
}
__except (EXCEPTION_EXECUTE_HANDLER) {
    /* Clean up after any failure reported inside the __try block. */
    if (pOutFile != NULL) UnmapViewOfFile(pOutFile);
    if (pInFile != NULL) UnmapViewOfFile(pInFile);
    /* ... close hOutMap, hInMap, hOut, and hIn ... */
    if (!complete)
        DeleteFile(fOut);
    return FALSE;
}
Another advantage of memory mapping is the ability to use convenient memory-based algorithms to process files. Sorting data in memory, for instance, is much easier than sorting records in a file.
The program explained in Topic 77 sorts a file with fixed-length records. This program, called sortFL, is similar to the earlier binary-search-tree sorting example in that it assumes an 8-byte sort key at the start of each record, but this example is restricted to fixed-length records.
Figure T77-1: File Conversion with Memory-Mapped Files
The sorting is performed by the <stdlib.h> C library function qsort. Notice that qsort
requires a programmer-defined record comparison function.
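As an illustration, a comparison function for 8-character keys at the start of each record might look as follows; KEY_SIZE, RECORD_SIZE, and KeyCompare are illustrative names, not the book's code:

#include <stdlib.h>
#include <tchar.h>

#define KEY_SIZE 8

/* Compare only the key portion at the start of each fixed-length record. */
static int KeyCompare(const void *p1, const void *p2)
{
    return _tcsncmp((const TCHAR *)p1, (const TCHAR *)p2, KEY_SIZE);
}

/* Usage, assuming pView points at the mapped file containing nRecords
   records of RECORD_SIZE TCHARs each:
       qsort(pView, nRecords, RECORD_SIZE * sizeof(TCHAR), KeyCompare);   */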
This program structure is straightforward. Simply create the file mapping on a temporary
copy of the input file, create a single view of the file, and invoke qsort. There is no file
I/O. Then the sorted file is sent to standard output using _tprintf, although a null
character is appended to the file map.
Exception and error handling are omitted in the listing but are in the Examples solution
on the recommended book’s Website.
File maps are convenient, as the preceding examples demonstrate. Suppose, however,
that the program creates a data structure with pointers in a mapped file and expects to
access that file in the future. Pointers will all be relative to the virtual address returned
from MapViewOfFile, and they will be meaningless when mapping the file the next time.
The solution is to use based pointers, which are actually offsets relative to another
pointer. The Microsoft C syntax, available in Visual C++ and some other systems, is:
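The declaration takes the following general form; the example names below are illustrative, and some older texts spell the keyword _based:

type __based(base) declarator

/* For example: */
LPTSTR pBase;                       /* the base pointer                        */
TCHAR __based(pBase) *bpKey;        /* an offset relative to pBase; note the * */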
Notice that the syntax forces use of the *, a practice that is contrary to Windows
convention but which the programmer could easily fix with a typedef.
Previously we developed a program that sorted records stored in a memory-mapped file. Sorting huge files each time they are accessed is not feasible, so permanent indexes are maintained on the required key.
Using the address returned by MapViewOfFile() to maintain indexes is meaningless, as the address is liable to change with each call to the API. A simple methodology is to maintain an array of records, build an index for the records, and subsequently use the index to access records directly.
The program uses records of varying sizes in a file. It uses the first field of each record, 8 characters long, as the key. There are two file mappings: one maps the original file and the other maps the index file. Each record in the index file contains a key and the location in the original file of the record containing that key. Once the index file is created, it can easily be used later, and its records can be sorted for faster searching. The input file remains unchanged. A pictorial representation of the example is shown below.
Lesson 80 : Dynamic Link Libraries (DLLs)
We have previously seen example uses of memory-mapped files in Windows. This is a fundamental feature of Windows: Windows itself uses it when working with Dynamic Link Libraries (DLLs). DLLs are one of the most important components of Windows, on which many high-level technologies, such as COM, depend.
Static Linking
The conventional approach is to gather all the source code and library functions, compile them, and encapsulate everything into a single executable file. This approach is simple but has a few disadvantages.
● The executable image will be large, as it contains all the library functions. Hence it consumes more disk space and requires more physical memory to run.
● If a library function is updated, the whole program must be recompiled and relinked.
● Many programs may require the same library function; each will have its own static copy, so resource requirements increase.
● Portability is reduced, as a program compiled with certain environment settings will run the same statically linked functions in a different environment where some other version of the library might be more appropriate.
With dynamic linking, the library code is not embedded in the executable; as a result, the size of the executable package is smaller. DLLs can easily be used to create shared libraries which can be used by multiple programs concurrently. Only a single copy of a shared DLL is placed in memory; all the processes sharing the DLL map the DLL's code onto their own address space, while each program has its own copy of the DLL's global variables.
New versions or updates can be supported simply by providing a new DLL, without the need to recompile the main code. The library runs in the same process as the calling program.
Importance of DLLs
DLLs are used in almost all modern operating systems, and they are most important in Windows, where they are used to implement the OS interfaces. The entire Windows API is supported by a set of DLLs that are invoked to call kernel services. DLL code can be shared by multiple processes. A DLL function, when invoked by a process, runs in that process's address space; therefore, it can use resources of the calling process, such as file handles, and it uses the calling thread's stack. DLLs should be written in a thread-safe manner. A DLL exports variables as well as function entry points.
Implicit linking is the easier of the two techniques. The functions to be shared are collected and built as a DLL. The build process also produces a .LIB file, which is a stub for the actual code; the stub is linked into the calling program at build time and provides a placeholder for each function in the DLL.
The placeholder (stub) calls the original function in the DLL. The .LIB file should be placed in a common user library directory for the project. The build process also constructs the .DLL file, which contains the actual binary image of the functions; this file is usually placed in the same directory as the application.
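A minimal sketch of implicit linking is shown below; the file and symbol names (mylib.h, MYLIB_API, MyLibAdd) are illustrative:

/* mylib.h - shared header, used by both the DLL and its clients. */
#ifdef MYLIB_EXPORTS
#define MYLIB_API __declspec(dllexport)    /* defined when building the DLL */
#else
#define MYLIB_API __declspec(dllimport)    /* clients import the function   */
#endif
MYLIB_API int MyLibAdd(int a, int b);

/* mylib.c - compiled with /D MYLIB_EXPORTS and linked as the DLL; the build
   produces mylib.dll and mylib.lib (the import library, i.e., the stub that
   is linked into client programs). */
#include "mylib.h"
MYLIB_API int MyLibAdd(int a, int b) { return a + b; }

/* client.c - linked against mylib.lib at build time; mylib.dll is loaded
   automatically when the program starts. */
#include <stdio.h>
#include "mylib.h"
int main(void) { printf("%d\n", MyLibAdd(2, 3)); return 0; }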