Given an array of 10 numbers with values in the range 0 to 9, where some values may be repeated, find the numbers which aren't present.
Here is an implementation in C.
#include <stdio.h>

int main(void)
{
    int a[10], b[10];
    int number, i, j;
    /* sample data: the value 5 is missing (4 appears twice) */
    a[0] = 0;
    a[1] = 1;
    a[2] = 2;
    a[3] = 3;
    a[4] = 4;
    a[5] = 4;
    a[6] = 6;
    a[7] = 7;
    a[8] = 8;
    a[9] = 9;
    /* b[i] == 1 means the value i occurs somewhere in a[] */
    for (i = 0; i < 10; i++)
        b[i] = 0;
    for (i = 0; i < 10; i++)
    {
        number = i;
        for (j = 0; j < 10; j++)
        {
            if (a[j] == number)
            {
                b[i] = 1;
                break;  /* value found, no need to scan further */
            }
        }
    }
    /* any value that was never marked is absent from the array */
    for (i = 0; i < 10; i++)
        if (b[i] == 0)
            printf("%d\n", i);
    return 0;
}
The following program finds the first non-repeated character in a string.

#include <stdio.h>
#include <string.h>

int main(void)
{
    char str[] = "swiss";   /* sample input: 'w' is the first non-repeated character */
    int n = strlen(str);
    int i, j;
    char q = '\0';
    for (i = 0; i < n; i++)
    {
        int repeated = 0;
        for (j = 0; j < n; j++)
        {
            if (i != j && str[i] == str[j])
            {
                repeated = 1;   /* str[i] occurs again elsewhere */
                break;
            }
        }
        if (!repeated)
        {
            q = str[i];         /* first non-repeated character found */
            break;
        }
    }
    if (q != '\0')
        printf("non-repeated character: %c\n", q);
    else
        printf("no non-repeated character\n");
    return 0;
}
You can distinguish slightly between array objects and array types. Often people use array objects which are allocated
with malloc and used via a pointer to the first element. But C also has specific types for arrays of different
sizes, and for variable-length arrays (VLAs), whose size is set when they are created. VLAs have a slightly misleading
name: the size is only "variable" in the sense that it isn't fixed at compile time. It can't change during the lifetime
of the object.
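A minimal sketch, assuming a C99 (or later) compiler with VLA support, contrasting a malloc'd array object used through a pointer with a VLA whose size is chosen at run time but fixed for its lifetime; the function name demo and the size 5 are purely illustrative:

#include <stdio.h>
#include <stdlib.h>

void demo(int n)
{
    /* array object on the heap, used via a pointer to its first element */
    int *heap_arr = malloc(n * sizeof *heap_arr);
    if (heap_arr == NULL)
        return;

    /* variable-length array: size chosen now, but fixed for its whole lifetime */
    int vla[n];

    for (int i = 0; i < n; i++)
    {
        heap_arr[i] = i;
        vla[i] = i * i;
    }

    printf("last elements: heap_arr=%d, vla=%d\n", heap_arr[n - 1], vla[n - 1]);

    free(heap_arr);     /* the VLA is released automatically at the end of the block */
}

int main(void)
{
    demo(5);
    return 0;
}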
What are the time and space complexities of merge sort, and when is it preferred over quick sort?
Merge sort runs in O(n log n) time in the best, average, and worst cases. Sorting an array requires O(n) auxiliary
space for the merge step, while sorting a linked list needs only a small constant amount of extra storage (plus the
recursion stack).

Merge sort is very efficient for immutable data structures like linked lists. Quick sort is typically faster than
merge sort when the data is stored in memory. However, when the data set is huge and is stored on external devices
such as a hard drive, merge sort is the clear winner in terms of speed: it minimizes the expensive reads of the
external drive, and when operating on linked lists it requires only a small constant amount of auxiliary storage.
Quick sort works well for sorting in-place. In particular, most of the operations
can be defined in terms of swapping
pairs of elements in an array. To do that, however, you normally "walk" through the
array with two pointers
(or indexes, etc.) One starts at the beginning of the array and the other at the
end. Both then work their way toward the
middle (and you're done with a particular partition step when they meet). That's
expensive with files, because files are
oriented primarily toward reading in one direction, from beginning to end. Starting
from the end and seeking backwards is
usually relatively expensive.
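A rough sketch of that two-index partition step (a Hoare-style partition), using the first element as the pivot purely as an illustrative choice; the helper names are not from any particular library:

#include <stdio.h>

static void swap(int *x, int *y)
{
    int t = *x;
    *x = *y;
    *y = t;
}

static int partition(int a[], int lo, int hi)
{
    int pivot = a[lo];
    int i = lo - 1;
    int j = hi + 1;

    for (;;)
    {
        do { i++; } while (a[i] < pivot);   /* walk forward from the beginning */
        do { j--; } while (a[j] > pivot);   /* walk backward from the end */

        if (i >= j)
            return j;          /* the indexes met: this partition step is done */

        swap(&a[i], &a[j]);    /* put the out-of-place pair on the correct sides */
    }
}

static void quicksort(int a[], int lo, int hi)
{
    if (lo < hi)
    {
        int p = partition(a, lo, hi);
        quicksort(a, lo, p);        /* Hoare scheme: pivot stays in the left half */
        quicksort(a, p + 1, hi);
    }
}

int main(void)
{
    int a[] = {5, 2, 9, 1, 7, 3};
    int n = sizeof a / sizeof a[0];

    quicksort(a, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}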
At least in its simplest incarnation, merge sort is pretty much the opposite. The
easy way to implement it only requires
looking through the data in one direction, but involves breaking the data into two
separate pieces, sorting the pieces,
then merging them back together.
With a linked list, it's easy to take (for example) alternating elements in one
linked list, and manipulate the links
to create two linked lists from those same elements instead. With an array,
rearranging elements so alternating elements
go into separate arrays is easy if you're willing to create a copy as big as the
original data, but otherwise rather more
non-trivial.
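A small sketch of that splitting step on a singly linked list: alternating nodes are rewired into two lists without copying any data. The struct node layout and the name split_alternating are illustrative assumptions:

#include <stdio.h>

struct node {
    int value;
    struct node *next;
};

static void split_alternating(struct node *head, struct node **a, struct node **b)
{
    struct node **tail_a = a;
    struct node **tail_b = b;
    int to_a = 1;

    *a = *b = NULL;
    while (head != NULL)
    {
        struct node *next = head->next;
        head->next = NULL;
        if (to_a) { *tail_a = head; tail_a = &head->next; }   /* link node into list a */
        else      { *tail_b = head; tail_b = &head->next; }   /* link node into list b */
        to_a = !to_a;
        head = next;
    }
}

int main(void)
{
    /* build the list 1 -> 2 -> 3 -> 4 -> 5 */
    struct node nodes[5];
    for (int i = 0; i < 5; i++) {
        nodes[i].value = i + 1;
        nodes[i].next = (i < 4) ? &nodes[i + 1] : NULL;
    }

    struct node *a, *b;
    split_alternating(&nodes[0], &a, &b);

    for (struct node *p = a; p; p = p->next) printf("%d ", p->value);  /* 1 3 5 */
    printf("\n");
    for (struct node *p = b; p; p = p->next) printf("%d ", p->value);  /* 2 4 */
    printf("\n");
    return 0;
}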
Merge sort is faster in these situations because it reads the items sequentially,
typically making log2(N) passes over
the data. There is much less I/O involved, and much less time spent following links
in a linked list.
Quicksort is fast when the data fits into memory and can be addressed directly.
Mergesort is faster when data won't fit
into memory or when it's expensive to get to an item.
Note that a large file sort typically loads as much of the file as it can into memory, quicksorts that chunk, writes
it out to a temporary file, and repeats until it has gone through the entire file. At that point there is some number
of blocks, each one of which is sorted, and the program then does an N-way merge of them to produce the sorted output.
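A rough sketch of that final N-way merge over already-sorted temporary files, assuming each run holds integers in sorted order, one per line; the file names run0.txt ... run3.txt and the run count of 4 are assumptions for illustration:

#include <stdio.h>

#define RUNS 4

int main(void)
{
    FILE *run[RUNS];
    int head[RUNS];        /* current (smallest unconsumed) value from each run */
    int alive[RUNS];       /* whether the run still has data left */
    char name[32];

    for (int i = 0; i < RUNS; i++)
    {
        sprintf(name, "run%d.txt", i);
        run[i] = fopen(name, "r");
        alive[i] = run[i] && fscanf(run[i], "%d", &head[i]) == 1;
    }

    for (;;)
    {
        /* pick the run whose current value is smallest */
        int best = -1;
        for (int i = 0; i < RUNS; i++)
            if (alive[i] && (best < 0 || head[i] < head[best]))
                best = i;
        if (best < 0)
            break;                     /* every run is exhausted */

        printf("%d\n", head[best]);    /* emit the next value of the sorted output */
        alive[best] = fscanf(run[best], "%d", &head[best]) == 1;  /* advance that run */
    }

    for (int i = 0; i < RUNS; i++)
        if (run[i])
            fclose(run[i]);
    return 0;
}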