Outro may refer to:
In music, the conclusion is the ending of a composition and may take the form of a coda or outro.
Pieces using sonata form typically use the recapitulation to conclude the piece, providing closure through the repetition of thematic material from the exposition in the tonic key. In all musical forms, other techniques include "altogether unexpected digressions just as a work is drawing to its close, followed by a return...to a consequently more emphatic confirmation of the structural relations implied in the body of the work."
For example:
Outro is a 2002 album by Jair Oliveira. Jair’s second album blends jazz, samba, soul and MPB. Most of Outro's songs were co-written by fellow Brazilian singer and composer Ed Motta.
Unicode is a computing industry standard for the consistent encoding, representation, and handling of text expressed in most of the world's writing systems. Developed in conjunction with the Universal Coded Character Set (UCS) standard and published as The Unicode Standard, the latest version of Unicode contains a repertoire of more than 120,000 characters covering 129 modern and historic scripts, as well as multiple symbol sets. The standard consists of a set of code charts for visual reference, an encoding method and set of standard character encodings, a set of reference data files, and a number of related items, such as character properties, rules for normalization, decomposition, collation, rendering, and bidirectional display order (for the correct display of text containing both right-to-left scripts, such as Arabic and Hebrew, and left-to-right scripts). As of June 2015, the most recent version is Unicode 8.0. The standard is maintained by the Unicode Consortium.
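To make the normalization and decomposition rules concrete, here is a minimal sketch using Python's standard unicodedata module (one implementation of the standard's rules, not the standard itself). The accented letter é can be stored either as a single precomposed code point or as a base letter plus a combining accent; normalization maps both spellings to one canonical form.

```python
import unicodedata

precomposed = "\u00e9"   # é as one code point, U+00E9
decomposed = "e\u0301"   # e followed by U+0301 COMBINING ACUTE ACCENT

# The two strings render identically but compare unequal code point by code point.
print(precomposed == decomposed)  # False

# NFC composes to the precomposed form; NFD decomposes to base + combining mark.
print(unicodedata.normalize("NFC", decomposed) == precomposed)  # True
print(unicodedata.normalize("NFD", precomposed) == decomposed)  # True
```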
Unicode's success at unifying character sets has led to its widespread and predominant use in the internationalization and localization of computer software. The standard has been implemented in many recent technologies, including modern operating systems, XML, the Java programming language, and the Microsoft .NET Framework.
UTF-8 is a character encoding capable of encoding all possible characters, or code points, in Unicode.
The encoding is variable-length and uses 8-bit code units. It was designed for backward compatibility with ASCII, and to avoid the complications of endianness and byte order marks in the alternative UTF-16 and UTF-32 encodings. The name is derived from: Universal Coded Character Set + Transformation Format – 8-bit.
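The endianness point can be seen directly with Python's built-in codecs (a minimal sketch, not part of any encoding specification): UTF-16's 16-bit code units can be serialized in either byte order, so a byte order mark may be prepended, while UTF-8's code units are single bytes and need neither.

```python
text = "A"  # U+0041

print(text.encode("utf-8"))      # b'A'             -- one byte, no BOM needed
print(text.encode("utf-16-le"))  # b'A\x00'         -- little-endian code unit
print(text.encode("utf-16-be"))  # b'\x00A'         -- big-endian code unit
print(text.encode("utf-16"))     # b'\xff\xfeA\x00' -- BOM, then native byte
                                 #    order (little-endian platform shown)
```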
UTF-8 is the dominant character encoding for the World Wide Web, accounting for 86.2% of all Web pages in January 2016 (with the most popular East Asian encoding, GB 2312, at 0.9% and Shift JIS at 1.1%). The Internet Mail Consortium (IMC) recommends that all e-mail programs be able to display and create mail using UTF-8, and the W3C recommends UTF-8 as the default encoding in XML and HTML.
UTF-8 encodes each of the 1,112,064 valid code points in the Unicode code space (1,114,112 code points minus 2,048 surrogate code points) using one to four 8-bit bytes (a group of 8 bits is known as an octet in the Unicode Standard). Code points with lower numerical values (i.e., earlier code positions in the Unicode character set, which tend to occur more frequently) are encoded using fewer bytes. The first 128 characters of Unicode, which correspond one-to-one with ASCII, are encoded using a single octet with the same binary value as ASCII, making valid ASCII text valid UTF-8-encoded Unicode as well. Furthermore, ASCII bytes do not occur when encoding non-ASCII code points into UTF-8, which makes UTF-8 safe to use within most programming and document languages that interpret certain ASCII characters in a special way, e.g., as an end-of-string marker.
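These byte-length rules can be checked with Python's built-in str.encode (a minimal sketch; the sample characters are arbitrary illustrations): lower code points take fewer bytes, the ASCII range maps to itself, and every byte of a multi-byte sequence has its high bit set, so no ASCII byte can appear inside one.

```python
samples = [
    ("A", "U+0041 (ASCII)"),                     # expect 1 byte
    ("\u00e9", "U+00E9 (Latin-1 range)"),        # expect 2 bytes
    ("\u20ac", "U+20AC (rest of the BMP)"),      # expect 3 bytes
    ("\U0001f600", "U+1F600 (above the BMP)"),   # expect 4 bytes
]

for char, label in samples:
    encoded = char.encode("utf-8")
    print(f"{label}: {len(encoded)} byte(s) -> {encoded.hex(' ')}")

# ASCII maps to itself, so valid ASCII text is already valid UTF-8...
assert "A".encode("utf-8") == b"A"

# ...and every byte of a multi-byte sequence is >= 0x80 (high bit set),
# so no ASCII byte such as NUL can occur inside one.
assert all(b >= 0x80 for b in "\u20ac".encode("utf-8"))
```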
UTF-16 (16-bit Unicode Transformation Format) is a character encoding capable of encoding all 1,112,064 possible characters in Unicode. The encoding is variable-length, as code points are encoded with one or two 16-bit code units. (See Comparison of Unicode encodings for a comparison of UTF-8, UTF-16, and UTF-32.)
UTF-16 developed from an earlier fixed-width 16-bit encoding known as UCS-2 (for 2-byte Universal Character Set) once it became clear that a fixed-width 2-byte encoding could not encode enough characters to be truly universal.
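The one-or-two-unit behaviour, and the UCS-2 limitation that motivated it, can be illustrated with Python's built-in codecs and the struct module (a minimal sketch; utf16_units is a hypothetical helper name):

```python
import struct

def utf16_units(char: str) -> list[int]:
    """Return the 16-bit code units of a character's big-endian UTF-16 encoding."""
    data = char.encode("utf-16-be")
    return [unit for (unit,) in struct.iter_unpack(">H", data)]

# A Basic Multilingual Plane character fits in one unit -- all that the
# fixed-width UCS-2 encoding could represent.
print([hex(u) for u in utf16_units("\u20ac")])      # ['0x20ac']

# A code point above U+FFFF needs a high/low surrogate pair, which is why
# 2,048 surrogate code points are reserved and are not valid characters.
print([hex(u) for u in utf16_units("\U0001f600")])  # ['0xd83d', '0xde00']
```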
In the late 1980s, work began on developing a uniform encoding for a "Universal Character Set" (UCS) that would replace earlier language-specific encodings with one coordinated system. The goal was to include all required characters from most of the world's languages, as well as symbols from technical domains such as science, mathematics, and music. The original idea was to expand the typical 256-character encodings, which require 1 byte per character, into an encoding using 2¹⁶ = 65,536 values, requiring 2 bytes per character. Two groups worked on this in parallel, ISO/IEC and the Unicode Consortium, the latter representing mostly manufacturers of computing equipment. The two groups attempted to synchronize their character assignments so that the developing encodings would be mutually compatible. The early 2-byte encoding was usually called "Unicode", but is now called "UCS-2".