Overclocking is the process of making a computer or component operate faster than specified by the manufacturer by modifying system parameters. One of the most important techniques is running at a higher clock rate (more clock cycles per second; hence the name "overclocking"), but other parameters, such as CPU multiplier and memory timings, can also be changed and would be considered to be overclocking. Operating voltages may also be changed (increased), which can increase the speed at which operation remains stable. Most overclocking techniques increase power consumption, generating more heat, which must be carried away.
The purpose of overclocking is to increase the operating speed of given hardware. The trade-offs are increased power consumption and fan noise, possible instability if the equipment is overclocked too far, and the risk of damage from excessive overvoltage or heat generation. In extreme cases, costly and complex cooling (e.g., water cooling) is required.
Conversely, underclocking accepts slower operation in exchange for reduced power consumption and temperature, lower cooling requirements (fewer and slower fans make for quieter operation) and, where relevant, longer battery life per charge. Some manufacturers underclock components of battery-powered equipment to improve battery life.
The speed gained by overclocking depends largely upon the application; benchmarks for different purposes are published. In some cases there is a simple speed gain and saving of time; in others a certain speed may be required for correct operation, as in displaying high-resolution video and playing games with fast action. The numerical gain varies, but is often of the order of 20%.
Many people overclock (or 'rightclock') their hardware to improve its performance. The practice is more common among enthusiasts than professional users, since overclocking carries risks of less reliable functioning and of damage. Overclocking serves several purposes. For professional users it allows pushing the boundary of personal computing capacity, improving productivity or allowing technologies to be tested beyond what available component specifications permit, before moving into specialized and more expensive computing hardware; this takes advantage of the manufacturing practice of rating components at levels that optimize yield and profit margin, so some parts are capable of more than their ratings suggest. There are also hobbyists who, like car enthusiasts, enjoy building, tuning, and comparison racing their systems with standardized benchmark software. Some hobbyists purchase less expensive components and overclock them to higher clock rates in an attempt to match the performance of costlier parts while saving money. A similar cost-saving approach is overclocking outdated components to keep pace with new system requirements rather than purchasing new hardware; if the overclocking stresses equipment to the point of failure, little is lost, as it is fully depreciated and would have needed to be replaced in any case.[1]
Computer components that may be overclocked include processors (CPU), video cards, motherboard chipsets, and RAM. Most modern CPUs derive their effective operating speed by multiplying the system clock frequency by a factor (the CPU multiplier). CPUs can be overclocked by manipulating the CPU multiplier, and the CPU and other components can be overclocked by increasing the speed of the system clock or other clocks (such as a front-side bus (FSB) clock). As clock speeds are increased, components will ultimately stop operating reliably, or fail permanently, even if voltages are increased to maximum safe levels. The maximum speed is typically found by overclocking to the point of instability, then backing off to a slightly lower setting. Components are guaranteed to operate correctly only up to their rated values; beyond those ratings, different samples may have different overclocking potential.
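A minimal sketch of how these settings interact, using hypothetical figures rather than the specifications of any real CPU: the effective core clock is the base (system or FSB) clock multiplied by the CPU multiplier, so raising either value overclocks the part.

```python
# Illustrative sketch: how the base clock and multiplier combine into the core clock.
# The figures below are hypothetical examples, not specifications of any real CPU.

def core_clock_mhz(base_clock_mhz: float, multiplier: float) -> float:
    """Effective CPU core frequency = base (system/FSB) clock x multiplier."""
    return base_clock_mhz * multiplier

stock = core_clock_mhz(200.0, 10.0)           # 200 MHz base clock x 10 = 2000 MHz
via_multiplier = core_clock_mhz(200.0, 11.0)  # raising the multiplier: 2200 MHz
via_base_clock = core_clock_mhz(220.0, 10.0)  # raising the base clock: 2200 MHz, but this
                                              # also speeds up everything else derived
                                              # from that clock (memory, buses, ...)

print(f"stock {stock:.0f} MHz, via multiplier {via_multiplier:.0f} MHz, "
      f"via base clock {via_base_clock:.0f} MHz")
```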
CPU multipliers, bus dividers, voltages, thermal loads, cooling techniques and several other factors such as individual semiconductor clock and thermal tolerances can affect the speed, stability, and safe operation of the computer.[2]
There are several things to consider when overclocking. The first is to ensure that the component is supplied with adequate power at a voltage sufficient to operate at the new clock rate. Supplying power with improper settings or applying excessive voltage can permanently damage a component. Since tight tolerances are required for overclocking, only more expensive motherboards have built-in overclocking capabilities. Motherboards with fewer features, such as those found in original equipment manufacturer (OEM) systems, often do not support overclocking.
In a professional production environment, overclocking is only likely to be used where the increase in speed justifies the cost of the expert manpower required, the possibly reduced reliability (and the consequent effect of exceeding manufacturers' ratings on maintenance contracts and warranties), and the higher power consumption. If a faster, but not the maximum possible, speed is required, it is often cheaper, when all costs are considered, to buy faster hardware.
All electronic circuits produce heat generated by the movement of electrical current. As clock frequencies in digital circuits and the applied voltage increase, so does the heat generated by components running at the higher performance levels. The relationship between clock frequency and thermal design power (TDP) is roughly linear. However, there is a limit to the maximum frequency, called a "wall". To overcome this limit, overclockers raise the chip voltage to increase the overclocking potential. Increasing the voltage raises power consumption, and consequently heat generation, significantly (proportionally to the square of the voltage in a linear circuit, for example); this requires more cooling to avoid damaging the hardware by overheating. In addition, some digital circuits slow down at high temperatures due to changes in MOSFET device characteristics. Conversely, the overclocker may decide to decrease the chip voltage while overclocking (a process known as undervolting) to reduce heat emissions while performance remains optimal.
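The commonly cited approximation for dynamic (switching) power in CMOS logic, P ≈ C·V²·f, makes this trade-off concrete. The sketch below uses assumed voltage and frequency figures, not values for any particular chip, to show how a modest overvolt raises heat output much faster than the clock gain alone.

```python
# Sketch of the usual CMOS dynamic-power approximation P ~ C * V^2 * f.
# Voltages and frequencies below are hypothetical illustrations, not real chip specs.

def relative_dynamic_power(v: float, f_ghz: float, v0: float, f0_ghz: float) -> float:
    """Dynamic power relative to a baseline operating point (capacitance cancels out)."""
    return (v / v0) ** 2 * (f_ghz / f0_ghz)

baseline = (1.20, 3.0)   # assumed stock point: 1.20 V at 3.0 GHz
overclock = (1.35, 3.6)  # assumed overclock:   1.35 V at 3.6 GHz

ratio = relative_dynamic_power(*overclock, *baseline)
print(f"clock: {overclock[1] / baseline[1]:.0%} of stock")   # 120% of stock
print(f"estimated dynamic power: {ratio:.0%} of stock")      # about 152% of stock
```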
Stock cooling systems are designed for the amount of power produced during non-overclocked use; overclocked circuits can require more cooling, such as by powerful fans, larger heat sinks, heat pipes and water cooling. Mass, shape, and material all influence the ability of a heatsink to dissipate heat. Efficient heatsinks are often made entirely of copper, which has high thermal conductivity, but is expensive.[3] Aluminium is more widely used; it has good thermal characteristics, though not as good as copper, and is significantly cheaper. Cheaper materials such as steel do not have good thermal characteristics. Heat pipes can be used to improve conductivity. Many heatsinks combine two or more materials to achieve a balance between performance and cost.[3]
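A rough steady-state model, in which the temperature rise above ambient is approximately the power dissipated multiplied by the total thermal resistance of the cooling path, illustrates why the extra watts from overclocking call for a better cooler. The figures below are illustrative assumptions, not measurements of any real hardware.

```python
# Rough steady-state model: die temperature ~ ambient + power * thermal resistance.
# All numbers are illustrative assumptions, not measurements of real hardware.

def die_temp_c(ambient_c: float, power_w: float, resistance_c_per_w: float) -> float:
    return ambient_c + power_w * resistance_c_per_w

ambient = 25.0        # assumed case/ambient temperature in degrees C
stock_cooler = 0.40   # assumed degrees C per watt for a modest stock heatsink
better_cooler = 0.25  # assumed degrees C per watt for a larger copper/heat-pipe cooler

print(die_temp_c(ambient, 65.0, stock_cooler))   # stock power, stock cooler:      51.0 C
print(die_temp_c(ambient, 95.0, stock_cooler))   # overclocked power, same cooler: 63.0 C
print(die_temp_c(ambient, 95.0, better_cooler))  # same overclock, better cooler:  48.75 C
```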
Water cooling carries waste heat to a radiator. Thermoelectric cooling devices which actually refrigerate using the Peltier effect can help with high thermal design power (TDP) processors made by Intel and AMD in the early twenty-first century. Thermoelectric cooling devices create temperature differences between two plates by running an electric current through the plates. This method of cooling is highly effective, but itself generates significant heat elsewhere which must be carried away, often by a convection-based heatsink or a water-cooling system.
Other cooling methods are forced convection and phase-transition cooling, which is used in refrigerators and can be adapted for computer use. Liquid nitrogen, liquid helium, and dry ice are used as coolants in extreme cases,[4] such as record-setting attempts or one-off experiments rather than cooling an everyday system. In June 2006, IBM and Georgia Institute of Technology jointly announced a new record in silicon-based chip clock rate (the rate a transistor can be switched at, not the CPU clock rate[5]) above 500 GHz, which was achieved by cooling the chip to 4.5 K (−268.7 °C; −451.6 °F) using liquid helium.[6] As of September 2011, the CPU frequency world record was 8.429 GHz.[7] These extreme methods are generally impractical in the long term, as they require refilling reservoirs of vaporizing coolant, and condensation can form on chilled components.[4] Moreover, silicon-based junction gate field-effect transistors (JFETs) degrade below temperatures of roughly 100 K (−173 °C; −280 °F) and eventually cease to function or "freeze out" at 40 K (−233 °C; −388 °F), since the silicon ceases to be semiconducting,[8] so using extremely cold coolants may cause devices to fail.
Submersion cooling, used by the Cray-2 supercomputer, involves submerging part of the computer system directly in a chilled liquid that is thermally conductive but has low electrical conductivity. The advantage of this technique is that no condensation can form on components.[9] A good submersion liquid is Fluorinert made by 3M, which is expensive and can only be purchased with a permit.[9] Another option is mineral oil, but impurities such as those in water might cause it to conduct electricity.[9]
As an overclocked component operates outside of the manufacturer's recommended operating conditions, it may function incorrectly, leading to system instability. Another risk is silent data corruption by undetected errors. Such failures might never be correctly diagnosed and may instead be incorrectly attributed to software bugs in applications, device drivers, or the operating system. Overclocked use may permanently damage components enough to cause them to misbehave (even under normal operating conditions) without becoming totally unusable.
In general, overclockers claim that testing can ensure that an overclocked system is stable and functioning correctly. Although software tools are available for testing hardware stability, it is generally impossible for any private individual to thoroughly test the functionality of a processor.[10] Achieving good fault coverage requires immense engineering effort; even with all of the resources dedicated to validation by manufacturers, faulty components and even design faults are not always detected.
A particular "stress test" can verify only the functionality of the specific instruction sequence used in combination with the data, and even then may not detect all faults in those operations. For example, an arithmetic operation may produce the correct result but incorrect flags; if the flags are not checked, the error will go undetected.
To further complicate matters, in process technologies such as silicon on insulator (SOI), devices display hysteresis—a circuit's performance is affected by the events of the past, so without carefully targeted tests it is possible for a particular sequence of state changes to work at overclocked rates in one situation but not another even if the voltage and temperature are the same. Often, an overclocked system which passes stress tests experiences instabilities in other programs.[11]
In overclocking circles, "stress tests" or "torture tests" are used to check for correct operation of a component. These workloads are selected as they put a very high load on the component of interest (e.g. a graphically-intensive application for testing video cards, or different math-intensive applications for testing general CPUs). Popular stress tests include Prime95, Everest, Superpi, OCCT, IntelBurnTest/Linpack/LinX, SiSoftware Sandra, BOINC, Intel Thermal Analysis Tool and Memtest86. The hope is that any functional-correctness issues with the overclocked component will show up during these tests, and if no errors are detected during the test, the component is then deemed "stable". Since fault coverage is important in stability testing, the tests are often run for long periods of time, hours or even days. An overclocked computer is sometimes described using the number of hours and the stability program used, such as "prime 12 hours stable".
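The self-checking principle behind such tools can be sketched as follows: run a deterministic, compute-heavy routine whose correct result is known in advance, repeat it for a long period, and treat any mismatch as a sign of instability. The toy example below only illustrates the idea; it is far too light a load to replace the established stress-testing programs named above.

```python
# Toy stress-test sketch: repeat a deterministic, compute-heavy workload and compare
# each run against a reference result computed once at the start. A hardware
# miscalculation anywhere in the loop changes the final sum and is reported.
import time

def workload(n: int = 200_000) -> int:
    total = 0
    for i in range(1, n):
        total = (total * 31 + i * i) % 1_000_000_007
    return total

def stress(duration_s: float = 10.0) -> bool:
    reference = workload()
    deadline = time.monotonic() + duration_s
    passes = 0
    while time.monotonic() < deadline:
        if workload() != reference:
            print(f"mismatch after {passes} passes: possible instability")
            return False
        passes += 1
    print(f"{passes} passes completed with no errors")
    return True

if __name__ == "__main__":
    stress()
```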
Overclockability arises in part due to the economics of the manufacturing processes of CPUs and other components. In many cases components are manufactured by the same process, and tested after manufacture to determine their actual maximum ratings. Components are then marked with a rating which tests show they meet under worst-case operating conditions. If manufacturing yield is high, more higher-rated components than required may be produced, and the manufacturer may mark and sell higher-performing components as lower-rated for marketing reasons. In this case many devices sold with a lower rating may behave in all ways as if higher-rated; in other cases worst-case operation at the higher rating may be more problematical. Pentium architect Bob Colwell calls overclocking an "uncontrolled experiment in better-than-worst-case system operation".[12]
Benchmarks are used to evaluate performance. The benchmarks can themselves become a kind of 'sport', in which users compete for the highest scores. As discussed above, stability and functional correctness may be compromised when overclocking, and meaningful benchmark results depend on correct execution of the benchmark. Because of this, benchmark scores may be qualified with stability and correctness notes (e.g. an overclocker may report a score, noting that the benchmark only runs to completion 1 in 5 times, or that signs of incorrect execution such as display corruption are visible while running the benchmark). A widely used test of stability is Prime95, which has built-in error checking and fails if the computer is unstable.
Given only benchmark scores, it may be difficult to judge the difference overclocking makes to the overall performance of a computer. For example, some benchmarks test only one aspect of the system, such as memory bandwidth, without taking into consideration how higher clock rates in this aspect will improve the system performance as a whole. Apart from demanding applications such as video encoding, high-demand databases and scientific computing, memory bandwidth is typically not a bottleneck, so a great increase in memory bandwidth may be unnoticeable to a user depending on the applications used. Other benchmarks, such as 3DMark, attempt to replicate game conditions.
Commercial system builders or component resellers sometimes overclock to sell items at higher profit margins. The seller makes more money by overclocking lower-priced components which are found to operate correctly and selling equipment at prices appropriate for higher-rated components. While the equipment will normally operate correctly, this practice may be considered fraudulent if the buyer is unaware of it.
Overclocking is sometimes offered as a legitimate service or feature for consumers, in which a manufacturer or retailer tests the overclocking capability of processors, memory, video cards, and other hardware products. Several video card manufacturers now offer factory-overclocked versions of their graphics accelerators, complete with a warranty, usually at a price intermediate between that of the standard product and a non-overclocked product of higher performance.
It is speculated that manufacturers implement overclocking prevention mechanisms such as CPU locking to prevent users from buying lower-priced items and overclocking them. These measures are sometimes marketed as a consumer protection benefit, but are often criticised by buyers.
Many motherboards are sold, and advertised, with extensive facilities for overclocking implemented in hardware and controlled by BIOS settings.[13]
Overclocking a component can only be of noticeable benefit if the component is on the critical path for a process, in other words, if it is a bottleneck. If disc access or the speed of an Internet connection limits the speed of a process, a 20% increase in processor speed is unlikely to be noticed. Overclocking a CPU will not noticeably benefit a game whose performance is limited by the speed of the graphics card.
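A simple Amdahl's-law style calculation, using hypothetical time fractions rather than measured workloads, shows how the benefit of overclocking one component is diluted when that component is not the bottleneck.

```python
# Amdahl's-law style sketch: overall speedup from speeding up only one component.
# The time fractions below are hypothetical examples, not measurements.

def overall_speedup(fraction_on_component: float, component_speedup: float) -> float:
    """Speedup of the whole task when only part of it benefits from the overclock."""
    return 1.0 / ((1.0 - fraction_on_component) + fraction_on_component / component_speedup)

cpu_overclock = 1.20  # a 20% CPU overclock

# A task spending 90% of its time on the CPU benefits almost fully ...
print(f"{overall_speedup(0.90, cpu_overclock):.2f}x")  # ~1.18x
# ... while a disc- or network-bound task barely notices the same overclock.
print(f"{overall_speedup(0.20, cpu_overclock):.2f}x")  # ~1.03x
```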
While overclocking which causes no instability is not a problem, occasional undetected errors are a serious risk for applications which must be error-free, for example scientific or financial applications.
Graphics cards can be overclocked.[17] There are utilities to achieve this, such as EVGA's Precision, RivaTuner, ATI Overdrive (on ATI cards only), MSI Afterburner, Zotac Firestorm (on Zotac cards), and the PEG Link Mode on Asus motherboards. Overclocking a GPU will often yield a marked increase in performance in synthetic benchmarks, usually reflected in game performance. It is sometimes possible to see that a graphics card is being pushed beyond its limits before any permanent damage is done by observing on-screen artifacts. Two such warning signs are widely recognized: green, flashing, random triangles appearing on the screen usually correspond to overheating problems on the GPU itself, while white, flashing dots appearing randomly (usually in groups) often mean that the card's RAM is overheating.[citation needed] It is common to run into one of these problems when overclocking graphics cards; both symptoms at the same time usually mean that the card is pushed severely beyond its heat, clock rate, or voltage limits (if they are seen when the card is not overclocked, they indicate a faulty card). If the clock speed is excessive but there is no overheating, the artifacts are different. There is no general rule, but usually if the core is pushed too hard, black circles or blobs appear on the screen, and overclocking the video memory beyond its limits usually results in the application or the entire operating system crashing. After a reboot, video settings are reset to the standard values stored in the video card firmware, and the maximum clock rate of that specific card is then known.
Some overclockers apply a potentiometer to the video card to manually adjust the voltage (which invalidates the warranty). This results in much greater flexibility, as overclocking software for graphics cards is rarely able to adjust the voltage. Excessive voltage increases may destroy the video card.
Flashing and unlocking can be used to improve the performance of a video card without technically overclocking it.
Flashing refers to using the firmware of a different card with the same core and compatible firmware, effectively making it a higher-model card; it can be difficult and may be irreversible. Sometimes standalone software to modify the firmware files can be found, e.g. NiBiTor (the GeForce 6/7 series are well regarded in this respect), without using the firmware of a better-model video card. For example, video cards with 3D accelerators (most, as of 2011) have two voltage and clock rate settings, one for 2D and one for 3D, but were designed to operate with three voltage stages, the third being somewhere between the other two; it serves as a fallback when the card overheats or as an intermediate stage when moving from 2D to 3D operation. It can therefore be wise to set this middle stage before "serious" overclocking, specifically because of the fallback ability: the card can drop down to this clock rate, losing a few (or sometimes a few dozen, depending on the setting) percent of its performance while it cools down, without dropping out of 3D mode, and afterwards return to the desired high-performance clock and voltage settings.
Some cards have abilities not directly connected with overclocking. For example, Nvidia's GeForce 6600GT (AGP flavor) has a temperature monitor used internally by the card, invisible to the user if standard firmware is used. Modifying the firmware can display a 'Temperature' tab.
Unlocking refers to enabling extra pipelines or pixel shaders. The 6800LE, the 6800GS and 6800 (AGP models only), and the Radeon X800 Pro VIVO were among the first cards to benefit from unlocking. While these models have either 8 or 12 pipelines enabled, they share the same 16x6 GPU core as a 6800GT or Ultra, but pipelines and shaders beyond those specified are disabled; the GPU may be fully functional, or may have been found to have faults which do not affect operation at the lower specification. GPUs found to be fully functional can be unlocked successfully, although it is not possible to be sure that there are no undiscovered faults; in the worst case the card may become permanently unusable. Later generations of ATI and Nvidia cards disable additional pipelines by irreversible laser cutting to prevent this practice.[citation needed]