Standard ASCII Code
The standard ASCII code runs from 0 to 127 and uses 7 bits for each character. Computers use ASCII codes to represent text, which makes it possible to transfer data from one computer to another. For a list of commonly used characters and their ASCII equivalents, refer to an ASCII table. Files stored in ASCII format are called ASCII files. ASCII is not always the default storage format, but programs are usually capable of storing data in it. Data files that contain numeric data, and executable programs, are not stored in ASCII format.
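As a simple illustration, the short Python sketch below (the sample string is arbitrary) prints the 7-bit ASCII code of each character in a string and confirms that every code falls in the 0 to 127 range.

    # Minimal sketch: inspect the 7-bit ASCII codes of a text string.
    text = "Hello, ASCII!"           # any plain-English sample text

    for ch in text:
        code = ord(ch)               # ord() gives the character's numeric code
        print(f"{ch!r} -> {code}")   # e.g. 'H' -> 72
        assert 0 <= code <= 127      # standard ASCII uses only codes 0..127 (7 bits)

    # Encoding the string as ASCII produces exactly one byte per character.
    data = text.encode("ascii")
    print(len(data), "bytes for", len(text), "characters")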
Extended ASCII code: There are several larger character sets that use 8 bits, which gives them 128 additional characters. The extra characters are used to represent non-English characters, mathematical symbols, and graphical symbols. Several companies and organizations have proposed extensions for these 128 characters. The DOS operating system uses a superset of ASCII called extended ASCII or high ASCII. The ISO Latin character set is a more universal standard that is used by many operating systems. An example of the difference appears in the sketch below.
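To see why these upper 128 codes are not universal, the hedged Python snippet below decodes the same byte under two different 8-bit extensions, DOS code page 437 ("high ASCII") and ISO Latin-1; the particular byte value is just an example.

    # Minimal sketch: one byte value above 127 means different things
    # under different extended-ASCII character sets.
    byte = bytes([0xE9])                 # a code outside the standard 0..127 range

    print(byte.decode("cp437"))          # DOS code page 437: Greek capital theta
    print(byte.decode("latin-1"))        # ISO Latin-1: e with acute accent (é)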
ANSI (American National Standards Institute): ANSI ensures that fair and open procedures are followed by accrediting a standards developer if the developer's formal written procedures meet ANSI's essential requirements for fairness, and by vetting the report documenting the development of a standard and approving that standard if the report shows that the approved procedures were followed. ANSI does not judge the content of a document, only the process used in writing it.
UNICODE: The Windows character set uses 8 bits to represent each character; therefore, the maximum number of characters that can be expressed using 8 bits is 256 (2^8). This is usually sufficient for Western languages, including the diacritical marks used in French, German, Spanish, and other languages. However, Eastern languages employ thousands of separate characters, which cannot be encoded by using a single-byte coding scheme. With the proliferation of computer commerce, double-byte coding schemes were developed so that characters could be represented in 8-bit, 16-bit, 24-bit, or 32-bit sequences. This requires complicated parsing algorithms; even so, using different code sets could yield entirely different results on two different computers. To address the problem of multiple coding schemes, the Unicode standard for data representation was developed. A 16-bit character coding scheme, Unicode can represent 65,536 (2^16) characters, which is enough to include all languages in computer commerce today, as
well as punctuation marks, mathematical symbols, and room for expansion. Unicode establishes a unique code for every character to ensure that character translation is always accurate.
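As a sketch of the idea, the Python fragment below (the sample characters are arbitrary) looks up the unique Unicode code point of a few characters from different scripts, then stores the same text in a 16-bit encoding, so each of these characters occupies two bytes.

    # Minimal sketch: every character has a single, unique Unicode code point.
    samples = ["A", "é", "Ω", "你", "∑"]   # Latin, accented Latin, Greek, Chinese, math

    for ch in samples:
        print(f"{ch} -> U+{ord(ch):04X}")  # e.g. '你' -> U+4F60

    # The same text stored with a 16-bit encoding (UTF-16, little-endian):
    text = "".join(samples)
    print(text.encode("utf-16-le"))        # two bytes per character for these code points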