
Demand Forecasting

Demand forecasting involves estimating future demand for products and services. It uses both informal methods like educated guesses as well as quantitative methods like analyzing historical sales data. Demand forecasting helps with pricing decisions, assessing future capacity needs, and determining whether to enter new markets. The most accurate methods combine forecasts from different techniques to reduce errors, though no single method can perfectly predict demand.

Demand forecasting is the activity of estimating the quantity of a product or service that consumers will purchase.

Demand forecasting involves techniques including both informal methods, such as educated guesses, and quantitative methods, such as the use of historical sales data or current data from test markets. Demand forecasting may be used in making pricing decisions, in assessing future capacity requirements, or in making decisions on whether to enter a new market.

Methods
No demand forecasting method is 100% accurate. Combined forecasts improve accuracy and reduce the likelihood of large errors. Reference class forecasting was developed to reduce error and increase accuracy in forecasting, including in demand forecasting.[2][3] Other experts have shown that rule-based forecasts produce more accurate results than combined forecasts.[4]

Reference class forecasting

Reference class forecasting is the method of predicting the future by looking at similar past situations and their outcomes. It predicts the outcome of a planned action based on actual outcomes in a reference class of similar actions to that being forecast. The theories behind reference class forecasting were developed by Daniel Kahneman and Amos Tversky; the theoretical work helped Kahneman win the Nobel Prize in Economics. The methodology and data needed for employing reference class forecasting in practice in policy, planning, and management were developed by Oxford professor Bent Flyvbjerg and the COWI consulting group in a joint effort. Today, reference class forecasting has found widespread use in practice in both public and private sector policy and management.

Kahneman and Tversky (1979a, b) found that human judgment is generally optimistic due to overconfidence and insufficient consideration of distributional information about outcomes. People therefore tend to underestimate the costs, completion times, and risks of planned actions, whereas they tend to overestimate the benefits of those same actions. Such error is caused by actors taking an "inside view", where the focus is on the constituents of the specific planned action instead of on the actual outcomes of similar ventures that have already been completed. Kahneman and Tversky concluded that disregard of distributional information, that is, risk, is perhaps the major source of error in forecasting. On that basis they recommended that forecasters "should therefore make every effort to frame the forecasting problem so as to facilitate utilizing all the distributional information that is available" (Kahneman and Tversky 1979b, p. 316). Using distributional information from previous ventures similar to the one being forecast is called taking an "outside view". Reference class forecasting is a method for taking an outside view on planned actions.

Reference class forecasting for a specific project involves the following three steps:

1. Identify a reference class of past, similar projects.
2. Establish a probability distribution for the selected reference class for the parameter that is being forecast.
3. Compare the specific project with the reference class distribution, in order to establish the most likely outcome for the specific project.

Combined forecasts

The method of taking a simple mean average of a panel of independent forecasts, derived from different forecasting methods, is known as combining forecasts, and the result is often referred to as a consensus forecast. Unless a particular forecast model which produces smaller forecast errors than other individual forecasts can be identified, adopting the consensus approach can be beneficial due to diversification gains. Combining economic forecasts is well established in many countries and can count central banks, government institutions and businesses among its users. In recent decades, consensus forecasts have attracted much interest, backed by the publication of academic research on forecast accuracy. Empirical studies[1][2] show that pooling forecasts increases forecast accuracy. One of the advantages of using consensus forecasts is that they can prove useful when there is a high degree of uncertainty or risk attached to the situation and selecting the most accurate forecast in advance is difficult. Even if one method is identified as the best, combining is still worthwhile if other methods can make some positive contribution to forecast accuracy. Moreover, many factors can affect an individual forecast, and these, along with any additional useful information, might be captured by using the consensus approach. Another argument in favour of this method is that individual forecasts may be subject to numerous behavioural biases, but these can be minimised by combining independent forecasts. Hence, combining is seen as helping to improve forecast accuracy by reducing the forecast errors of individual forecasts. Furthermore, averaging forecasts is likely to be more useful when the data and the forecasting techniques that the component forecasts are drawn from differ substantially. And even though it is only a simple approach (typically an unweighted mean average), this method is about as useful as more sophisticated models. Indeed, more recent studies in the past decade have shown that, over time, the equal-weights combined forecast is usually more accurate than the individual forecasts that make up the consensus.[3][4][5]
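As a minimal illustration of the equal-weights consensus approach described above, combining is just a simple mean of the independent forecasts. The sketch below is not from the source; the forecast values are invented for the example.

```python
# Illustrative sketch (not from the source): combining independent demand
# forecasts into an equal-weights consensus forecast. The numbers are made up.

def consensus_forecast(forecasts):
    """Return the simple (unweighted) mean of a panel of independent forecasts."""
    return sum(forecasts) / len(forecasts)

# Hypothetical next-quarter demand forecasts from three different methods,
# e.g. trend projection, expert judgment, and a regression model.
panel = [1200.0, 1350.0, 1275.0]

print(consensus_forecast(panel))  # 1275.0 units
```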

Methods that rely on qualitative assessment


Forecasting demand based on expert opinion. Some of the methods of this type are: unaided judgment, prediction markets, the Delphi technique, game theory, judgmental bootstrapping, simulated interaction, intentions and expectations surveys, conjoint analysis, and the jury of executive opinion method.

Methods that rely on quantitative data


Discrete event simulation, extrapolation, reference class forecasting, quantitative analogies, rule-based forecasting, neural networks, data mining, causal models, and segmentation.

Some of the other methods


a) Time series projection methods. These include the moving average method, the exponential smoothing method, and trend projection methods (a sketch of the first two appears after this list).

b) Causal methods. These include the chain-ratio method, the consumption level method, and the end-use method.
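The sketch below, with hypothetical monthly sales, illustrates the two smoothing techniques listed under (a). The window length and the smoothing constant are arbitrary choices made for the example, not values from the text.

```python
# Illustrative sketch (hypothetical data): a simple moving average uses the mean
# of the last n observations as the next-period forecast; single exponential
# smoothing weights recent observations more heavily via a constant alpha.

def moving_average_forecast(series, n=3):
    """Forecast the next period as the mean of the last n observations."""
    window = series[-n:]
    return sum(window) / len(window)

def exponential_smoothing_forecast(series, alpha=0.3):
    """Single exponential smoothing: S_t = alpha*y_t + (1 - alpha)*S_(t-1)."""
    smoothed = series[0]
    for y in series[1:]:
        smoothed = alpha * y + (1 - alpha) * smoothed
    return smoothed  # the final smoothed value serves as the next-period forecast

monthly_sales = [112, 118, 121, 125, 131, 128, 135]  # hypothetical units sold

print(moving_average_forecast(monthly_sales, n=3))         # (131+128+135)/3 = 131.33
print(exponential_smoothing_forecast(monthly_sales, 0.3))  # weighted toward recent months
```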

Methods that rely on qualitative assessment

1. Prediction markets

Prediction markets (also known as predictive markets, information markets, decision markets, idea futures, event derivatives, or virtual markets) are speculative markets created for the purpose of making predictions. The current market prices can then be interpreted as predictions of the probability of the event or the expected value of the parameter. For example, a prediction market security might reward a dollar if a particular candidate is elected, such that an individual who thinks the candidate had a 70% chance of being elected should be willing to pay up to 70 cents for such a security. People who buy low and sell high are rewarded for improving the market prediction, while those who buy high and sell low are punished for degrading the market prediction. Evidence so far suggests that prediction markets are at least as accurate as other institutions predicting the same events with a similar pool of participants.[1]

For example:

Public prediction markets


There are a number of commercial prediction markets. By far the largest is Betfair, which had a valuation in the region of 1.5 billion GBP in 2010.[14] Others include Intrade, a for-profit company with a large variety of contracts not including sports; the Iowa Electronic Markets, an academic market examining elections where positions are limited to $500; iPredict; and TradeSports, a prediction market for sporting events. In addition there are a number of virtual prediction markets where purchases are made with virtual money; these include The simExchange, Hollywood Stock Exchange, NewsFutures, the Popular Science Predictions Exchange, Hubdub (closed 30 April 2010), Knew The News, Tahministan, The Industry Standard's technology industry prediction market, and the Foresight Exchange Prediction Market. Bet2Give is a charity prediction market where real money is traded but ultimately all winnings are donated to the charity of the winner's choice.

Prediction Market Price Formats


The largest prediction market at present is Betfair, although it would be extremely difficult to recognise it as a prediction market in the sense that the price formats, on the surface, look nothing like the probability pricing, or binary option,[29] more closely associated with prediction markets. A prediction market might show a price of 38-40 for an event, which in effect means that somebody believes there is a 38% chance of the event taking place, while someone else considers the chance to be less than 40%. On Betfair the price of 40% is reflected as 2.50 (100/40), and this style of price is generally seen in continental Europe. In the UK the same price is shown as the fraction 6/4, in itself derivable from the Betfair price minus one, i.e. 2.5 - 1.0 = 1.5 = 6/4. In the US that same price would be reflected as 150, in China as 1.50, etc., with Indonesia, Malaysia and Italy all having their own variations on a theme.
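The relationships described above can be expressed directly in code. The sketch below is illustrative only: it converts a probability into the decimal, UK fractional, and US price formats mentioned in the text, and back to an implied probability. The US conversion shown applies to decimal prices of 2.0 or longer.

```python
# Illustrative sketch of the price-format relationships described above.
# A 40% chance corresponds to a Betfair-style decimal price of 100/40 = 2.50,
# a UK fractional price of 2.50 - 1 = 1.5 (i.e. 6/4), and a US price of +150.
from fractions import Fraction

def decimal_from_probability(prob):
    """Decimal (Betfair-style) price for a given probability, e.g. 0.40 -> 2.50."""
    return 1.0 / prob

def fractional_from_decimal(decimal_price):
    """UK fractional price: the decimal price minus one, e.g. 2.50 -> 3/2 (i.e. 6/4)."""
    return Fraction(decimal_price - 1).limit_denominator(100)

def us_from_decimal(decimal_price):
    """US ('moneyline') price for decimal prices of 2.0 or longer, e.g. 2.50 -> +150."""
    return round((decimal_price - 1) * 100)

def implied_probability(decimal_price):
    """Probability implied by a decimal price, e.g. 2.50 -> 0.40."""
    return 1.0 / decimal_price

p = 0.40
d = decimal_from_probability(p)  # 2.5
print(d, fractional_from_decimal(d), us_from_decimal(d), implied_probability(d))
# 2.5 3/2 150 0.4
```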

2. Delphi method

The Delphi method is a structured communication technique, originally developed as a systematic, interactive forecasting method which relies on a panel of experts.[1] In the standard version, the experts answer questionnaires in two or more rounds. After each round, a facilitator provides an anonymous summary of the experts' forecasts from the previous round as well as the reasons they provided for their judgments. Thus, experts are encouraged to revise their earlier answers in light of the replies of other members of their panel. It is believed that during this process the range of the answers will decrease and the group will converge towards the "correct" answer. Finally, the process is stopped after a predefined stop criterion (e.g. number of rounds, achievement of consensus, stability of results), and the mean or median scores of the final rounds determine the results.[2] Other versions, such as the Policy Delphi,[3] have been designed for normative and explorative use, particularly in the area of social policy and public health.[4] In Europe, more recent web-based experiments have used the Delphi method as a communication technique for interactive decision-making and e-democracy.[5] Delphi is based on the principle that forecasts (or decisions) from a structured group of individuals are more accurate than those from unstructured groups.[6] This has been indicated with the term "collective intelligence".[7] The technique can also be adapted for use in face-to-face meetings, and is then called mini-Delphi or Estimate-Talk-Estimate (ETE). Delphi has been widely used for business forecasting and has certain advantages over another structured forecasting approach, prediction markets.[8]
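A toy simulation can make the round structure concrete. The sketch below is purely illustrative and not part of the source: the initial estimates, the revision weight, and the stopping tolerance are invented assumptions. It simply mimics rounds of anonymous feedback (here the panel median), revision, a stop criterion, and a median of the final round.

```python
# Toy simulation (not from the source) of the Delphi process described above:
# experts answer in rounds, see an anonymous summary (the panel median),
# revise toward it, and the process stops on a predefined criterion.
import statistics

def delphi_rounds(initial_estimates, revision_weight=0.5, max_rounds=5, tolerance=1.0):
    estimates = list(initial_estimates)
    for round_no in range(1, max_rounds + 1):
        summary = statistics.median(estimates)  # anonymous feedback to the panel
        # each expert moves part of the way toward the panel median
        estimates = [e + revision_weight * (summary - e) for e in estimates]
        spread = max(estimates) - min(estimates)
        if spread < tolerance:  # stop criterion: stability of results
            break
    return statistics.median(estimates), round_no

# Hypothetical first-round sales forecasts (units) from five experts.
forecast, rounds_used = delphi_rounds([900, 1100, 1250, 1000, 1500])
print(forecast, rounds_used)
```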

Key characteristics
The following key characteristics of the Delphi method help the participants to focus on the issues at hand and separate Delphi from other methodologies:

Structuring of information flow

The initial contributions from the experts are collected in the form of answers to questionnaires and their comments to these answers. The panel director controls the interactions among the participants by processing the information and filtering out irrelevant content. This avoids the negative effects of face-to-face panel discussions and solves the usual problems of group dynamics.

Regular feedback

Participants comment on their own forecasts, the responses of others and on the progress of the panel as a whole. At any moment they can revise their earlier statements. While in regular group meetings participants tend to stick to previously stated opinions and often conform too much to the group leader, the Delphi method prevents this.

Anonymity of the participants

Usually all participants remain anonymous. Their identity is not revealed, even after the completion of the final report. This prevents the authority, personality, or reputation of some participants from dominating others in the process. Arguably, it also frees participants (to some extent) from their personal biases, minimizes the "bandwagon effect" or "halo effect", allows free expression of opinions, encourages open critique, and facilitates admission of errors when revising earlier judgments.

Use in forecasting
First applications of the Delphi method were in the field of science and technology forecasting. The objective of the method was to combine expert opinions on likelihood and expected development time, of the particular technology, in a single indicator. One of the first such reports, prepared in 1964 by Gordon and Helmer, assessed the direction of long-term trends in science and technology development, covering such topics as scientific breakthroughs, population control, automation, space progress, war prevention and weapon systems. Other forecasts of technology were dealing with vehicle-highway systems, industrial robots, intelligent internet, broadband connections, and technology in education. Later the Delphi method was applied in other areas, especially those related to public policy issues, such as economic trends, health and education. It was also applied successfully and with high accuracy in business forecasting. For example, in one case reported by Basu and

Schroeder (1977), the Delphi method predicted the sales of a new product during the first two years with an inaccuracy of 3-4% compared with actual sales. Quantitative methods produced errors of 10-15%, and traditional unstructured forecast methods had errors of about 20%. The Delphi method has also been used as a tool to implement multi-stakeholder approaches for participative policy-making in developing countries. The governments of Latin America and the Caribbean have successfully used the Delphi method as an open-ended public-private sector approach to identify the most urgent challenges for their regional ICT-for-development eLAC Action Plans.[11] As a result, governments have widely acknowledged the value of collective intelligence from the civil society, academic and private sector participants of the Delphi, especially in a field of rapid change, such as technology policies. In this sense, the Delphi method can contribute to a general appreciation of participative policy-making.

Delphi vs. prediction markets


As can be seen from the Methodology Tree of Forecasting, Delphi has characteristics similar to prediction markets, as both are structured approaches that aggregate diverse opinions from groups. Yet there are differences that may be decisive for their relative applicability for different problems.[8]

Some advantages of prediction markets derive from the possibility to provide incentives for participation:

1. They can motivate people to participate over a long period of time and to reveal their true beliefs.
2. They aggregate information automatically and instantly incorporate new information in the forecast.
3. Participants do not have to be selected and recruited manually by a facilitator. They themselves decide whether to participate if they think their private information is not yet incorporated in the forecast.

Delphi seems to have these advantages over prediction markets:

1. Potentially quicker forecasts if experts are readily available.

3. Game theory

Game theory is a method of studying strategic decision making. More formally, it is "the study of mathematical models of conflict and cooperation between intelligent rational decision-makers."[1] An alternative term suggested "as a more descriptive name for the discipline" is interactive decision theory.[2] Game theory is mainly used in economics, political science, and psychology, as well as logic and biology. The subject first addressed zero-sum games, such that one person's gains exactly equal the net losses of the other participant(s). Today, however, game theory applies to a wide range of behavioral relations, and has developed into an umbrella term for the logical side of decision science, covering both humans and non-humans, like computers. Classic uses include a sense of balance in numerous games, where each person has found or developed a tactic that cannot successfully better his results, given the others' approaches.

Modern game theory began with the idea regarding the existence of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by his 1944 book Theory of Games and Economic Behavior, with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility, which allowed mathematical statisticians and economists to treat decision-making under uncertainty. This theory was developed extensively in the 1950s by many scholars. Game theory was later explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been widely recognized as an important tool in many fields. Eight game-theorists have won the Nobel Memorial Prize in Economic Sciences, and John Maynard Smith was awarded the Crafoord Prize for his application of game theory to biology.

Representation of games
See also: List of games in game theory

The games studied in game theory are well-defined mathematical objects. A game consists of a set of players, a set of moves (or strategies) available to those players, and a specification of payoffs for each combination of strategies. Most cooperative games are presented in the characteristic function form, while the extensive and the normal forms are used to define noncooperative games.

Extensive form

Main article: Extensive form game

An extensive form game

The extensive form can be used to formalize games with a time sequencing of moves. Games here are played on trees (as pictured above). Here each vertex (or node) represents a point of choice for a player. The player is specified by a number listed by the vertex. The lines out of the vertex represent a possible action for that player. The payoffs are specified at the bottom of the tree. The extensive form can be viewed as a multi-player generalization of a decision tree. (Fudenberg & Tirole 1991, p. 67)

In the game pictured above, there are two players. Player 1 moves first and chooses either F or U. Player 2 sees Player 1's move and then chooses A or R. Suppose that Player 1 chooses U and then Player 2 chooses A; then Player 1 gets 8 and Player 2 gets 2. The extensive form can also capture simultaneous-move games and games with imperfect information. To represent it, either a dotted line connects different vertices to represent them as being part of the same information set (i.e., the players do not know at which point they are), or a closed line is drawn around them. (See the example in the imperfect information section.)

Normal form

Normal form or payoff matrix of a 2-player, 2-strategy game:

                          Player 2 chooses Left    Player 2 chooses Right
Player 1 chooses Up              4, 3                     0, 0
Player 1 chooses Down            1, 1                     3, 4

Main article: Normal-form game

The normal (or strategic form) game is usually represented by a matrix which shows the players, strategies, and payoffs (see the example above). More generally it can be represented by any function that associates a payoff for each player with every possible combination of actions. In the accompanying example there are two players; one chooses the row and the other chooses the column. Each player has two strategies, which are specified by the number of rows and the number of columns. The payoffs are provided in the interior. The first number is the payoff received by the row player (Player 1 in our example); the second is the payoff for the column player (Player 2 in our example). Suppose that Player 1 plays Up and that Player 2 plays Left. Then Player 1 gets a payoff of 4, and Player 2 gets 3. When a game is presented in normal form, it is presumed that each player acts simultaneously or, at least, without knowing the actions of the other. If players have some information about the choices of other players, the game is usually presented in extensive form. Every extensive-form game has an equivalent normal-form game; however, the transformation to normal form may result in an exponential blowup in the size of the representation, making it computationally impractical. (Leyton-Brown & Shoham 2008, p. 35)
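To make the normal-form representation concrete, the short sketch below (not from the source) checks every cell of the 2-by-2 payoff matrix above for pure-strategy Nash equilibria, i.e. cells where neither player can gain by unilaterally switching strategy. The strategy names and payoffs are those of the accompanying example.

```python
# Minimal sketch (not a full game-theory library): finding the pure-strategy
# Nash equilibria of the 2-player, 2-strategy normal-form game shown above.
# payoffs[(row, col)] = (payoff to Player 1, payoff to Player 2).

payoffs = {
    ("Up", "Left"): (4, 3), ("Up", "Right"): (0, 0),
    ("Down", "Left"): (1, 1), ("Down", "Right"): (3, 4),
}
row_strategies = ["Up", "Down"]
col_strategies = ["Left", "Right"]

def pure_nash_equilibria(payoffs):
    equilibria = []
    for r in row_strategies:
        for c in col_strategies:
            p1, p2 = payoffs[(r, c)]
            # r is a best response if no other row does better against c
            best_row = all(payoffs[(other, c)][0] <= p1 for other in row_strategies)
            # c is a best response if no other column does better against r
            best_col = all(payoffs[(r, other)][1] <= p2 for other in col_strategies)
            if best_row and best_col:
                equilibria.append((r, c))
    return equilibria

print(pure_nash_equilibria(payoffs))  # [('Up', 'Left'), ('Down', 'Right')]
```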

Characteristic function form

Main article: Cooperative game

In games that possess transferable utility, separate rewards are not given; rather, the characteristic function decides the payoff of each coalition. The idea is that a coalition that is 'empty', so to speak, does not receive a reward at all. The origin of this form is to be found in John von Neumann and Oskar Morgenstern's book: when looking at these instances, they assumed that when a coalition C forms, it plays against the complementary coalition (N\C) as if two individuals were playing a normal game. The balanced payoff of C is a basic function. Although there are differing examples that help determine coalitional amounts from normal games, not all appear that in their function form can be derived from such.

Formally, a characteristic function is given as (N, v), where N represents the group of people and v: 2^N → R is a utility function. Such characteristic functions have been expanded to describe games where there is no transferable utility.

Partition function form

The characteristic function form ignores the possible externalities of coalition formation. In the partition function form the payoff of a coalition depends not only on its members, but also on the way the rest of the players are partitioned (Thrall & Lucas 1963).

General and applied uses


As a method of applied mathematics, game theory has been used to study a wide variety of human and animal behaviors. It was initially developed in economics to understand a large collection of economic behaviors, including behaviors of firms, markets, and consumers. The use of game theory in the social sciences has expanded, and game theory has been applied to political, sociological, and psychological behaviors as well. Game-theoretic analysis was initially used to study animal behavior by Ronald Fisher in the 1930s (although even Charles Darwin makes a few informal game-theoretic statements). This work predates the name "game theory", but it shares many important features with this field. The developments in economics were later applied to biology largely by John Maynard Smith in his book Evolution and the Theory of Games. In addition to being used to describe, predict, and explain behavior, game theory has also been used to develop theories of ethical or normative behavior and to prescribe such behavior.[8] In economics and philosophy, scholars have applied game theory to help in the understanding of good or proper behavior. Game-theoretic arguments of this type can be found as far back as Plato.[9]

Description and modeling

A three stage Centipede Game

The first known use is to describe and model how human populations behave. Some scholars believe that by finding the equilibria of games they can predict how actual human populations will behave when confronted with situations analogous to the game being studied. This particular view of game theory has come under recent criticism. First, it is criticized because the assumptions made by game theorists are often violated. Game theorists may assume players always act in a way to directly maximize their wins (the Homo economicus model), but in practice, human behavior often deviates from this model. Explanations of this phenomenon are many; irrationality, new models of deliberation, or even different motives (like that of altruism). Game theorists respond by comparing their assumptions to those used in physics. Thus while their assumptions do not always hold, they can treat game theory as a reasonable scientific ideal akin to the models used by physicists. However, additional criticism

of this use of game theory has been levied because some experiments have demonstrated that individuals do not play equilibrium strategies. For instance, in the centipede game, guess 2/3 of the average game, and the dictator game, people regularly do not play Nash equilibria. There is an ongoing debate regarding the importance of these experiments. [10] Alternatively, some authors claim that Nash equilibria do not provide predictions for human populations, but rather provide an explanation for why populations that play Nash equilibria remain in that state. However, the question of how populations reach those points remains open. Some game theorists have turned to evolutionary game theory in order to resolve these issues. These models presume either no rationality or bounded rationality on the part of players. Despite the name, evolutionary game theory does not necessarily presume natural selection in the biological sense. Evolutionary game theory includes both biological as well as cultural evolution and also models of individual learning (for example, fictitious play dynamics).

Prescriptive or normative analysis


On the other hand, some scholars see game theory not as a predictive tool for the behavior of human beings, but as a suggestion for how people ought to behave. Since a Nash equilibrium of a game constitutes one's best response to the actions of the other players, playing a strategy that is part of a Nash equilibrium seems appropriate. However, this use of game theory has also come under criticism. First, in some cases it is appropriate to play a non-equilibrium strategy if one expects others to play non-equilibrium strategies as well (for an example, see the guess 2/3 of the average game mentioned above). Second, the Prisoner's Dilemma presents another potential counterexample. In the Prisoner's Dilemma, each player pursuing his own self-interest leads both players to be worse off than had they not pursued their own self-interests.

The Prisoner's Dilemma:

                 Cooperate    Defect
Cooperate         -1, -1      -10, 0
Defect             0, -10     -5, -5

4. Conjoint analysis (marketing)


Conjoint analysis is a statistical technique used in market research to determine how people value different features that make up an individual product or service. The objective of conjoint analysis is to determine what combination of a limited number of attributes is most influential on respondent choice or decision making. A controlled set of potential products or services is shown to respondents, and by analyzing how they make preferences between these products, the implicit valuation of the individual elements making up the product or service can be determined. These implicit valuations (utilities or part-worths) can be used to create market models that estimate market share, revenue and even profitability of new designs. Conjoint analysis originated in mathematical psychology and was developed by marketing professor Paul Green at the University of Pennsylvania and Data Chan. Other prominent conjoint analysis pioneers include professor V. Seenu Srinivasan of Stanford University, who developed a linear programming (LINMAP) procedure for rank-ordered data as well as a self-explicated approach, Richard Johnson (founder of Sawtooth Software), who developed the Adaptive Conjoint Analysis technique in the 1980s, and Jordan Louviere (University of Iowa), who invented and developed choice-based approaches to conjoint analysis and related techniques such as MaxDiff.

Today it is used in many of the social sciences and applied sciences including marketing, product management, and operations research. It is used frequently in testing customer acceptance of new product designs, in assessing the appeal of advertisements and in service design. It has been used in product positioning, but there are some who raise problems with this application of conjoint analysis (see disadvantages). Conjoint analysis techniques may also be referred to as multiattribute compositional modelling, discrete choice modelling, or stated preference research, and are part of a broader set of trade-off analysis tools used for systematic analysis of decisions. These tools include Brand-Price Trade-Off, Simalto, and mathematical approaches such as evolutionary algorithms or Rule Developing Experimentation.

Types of conjoint analysis


The earliest forms of conjoint analysis were what are known as Full Profile studies, in which a small set of attributes (typically 4 to 5) is used to create profiles that are shown to respondents, often on individual cards. Respondents then rank or rate these profiles. Using relatively simple dummy-variable regression analysis, the implicit utilities for the levels can be calculated. Two drawbacks were seen in these early designs. Firstly, the number of attributes in use was heavily restricted. With large numbers of attributes, the consideration task for respondents becomes too large, and even with fractional factorial designs the number of profiles for evaluation can increase rapidly. In order to use more attributes (up to 30), hybrid conjoint techniques were developed. The main alternative was to do some form of self-explication before the conjoint tasks and some form of adaptive computer-aided choice over the profiles to be shown. The second drawback was that the task itself was unrealistic and did not link directly to behavioural theory. In real-life situations, the task would be some form of actual choice between alternatives rather than the more artificial ranking and rating originally used. Jordan Louviere pioneered an approach that used only a choice task, which became the basis of choice-based conjoint and discrete choice analysis. This stated preference research is linked to econometric modeling and can be linked to revealed preference, where choice models are calibrated on the basis of real rather than survey data. Originally choice-based conjoint analysis was unable to provide individual-level utilities as it aggregated choices across a market. This made it unsuitable for market segmentation studies. With newer hierarchical Bayesian analysis techniques, individual-level utilities can be imputed back to provide individual-level data.

Information collection
Data for conjoint analysis is most commonly gathered through a market research survey, although conjoint analysis can also be applied to a carefully designed configurator or to data from an appropriately designed test market experiment. Market research rules of thumb apply with regard to statistical sample size and accuracy when designing conjoint analysis interviews.

The length of the research questionnaire depends on the number of attributes to be assessed and the method of conjoint analysis in use. A typical Adaptive Conjoint questionnaire with 20-25 attributes may take more than 30 minutes to complete. Choice-based conjoint, by using a smaller profile set distributed across the sample as a whole, may be completed in less than 15 minutes. Choice exercises may be displayed as a store front type layout or in some other simulated shopping environment.

Analysis

Any number of algorithms may be used to estimate utility functions. These utility functions indicate the perceived value of the feature and how sensitive consumer perceptions and preferences are to changes in product features. The actual mode of analysis will depend on the design of the task and profiles for respondents. For full profile tasks, linear regression may be appropriate; for choice-based tasks, maximum likelihood estimation, usually with logistic regression, is typically used. The original methods were monotonic analysis of variance or linear programming techniques, but these are largely obsolete in contemporary marketing research practice. In addition, hierarchical Bayesian procedures that operate on choice data may be used to estimate individual level utilities from more limited choice-based designs. A minimal sketch of the full-profile regression approach is given after the advantages list below.

Advantages

- estimates psychological tradeoffs that consumers make when evaluating several attributes together
- measures preferences at the individual level
- uncovers real or hidden drivers which may not be apparent to the respondents themselves
- realistic choice or shopping task
- able to use physical objects
- if appropriately designed, the ability to model interactions between attributes can be used to develop needs-based segmentation
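The following minimal sketch uses made-up ratings and NumPy (an assumed dependency, not something prescribed by the text) to illustrate the full-profile dummy-variable regression mentioned under Analysis above: respondents rate profiles, and least squares recovers the implicit part-worth of each attribute level relative to a base level.

```python
# Minimal sketch (hypothetical data) of the full-profile approach: a
# dummy-variable regression recovers part-worth utilities from profile ratings.
import numpy as np

# Four profiles built from two attributes with two levels each:
# brand (A = 0, B = 1) and price (low = 0, high = 1).
profiles = np.array([
    [0, 0],  # brand A, low price
    [0, 1],  # brand A, high price
    [1, 0],  # brand B, low price
    [1, 1],  # brand B, high price
])
ratings = np.array([8.0, 5.0, 9.0, 6.5])  # hypothetical ratings from one respondent

# Design matrix: an intercept column plus one dummy per non-base attribute level.
X = np.column_stack([np.ones(len(profiles)), profiles])
coefs, *_ = np.linalg.lstsq(X, ratings, rcond=None)
intercept, partworth_brand_B, partworth_high_price = coefs

print(f"base utility        : {intercept:.2f}")
print(f"part-worth, brand B : {partworth_brand_B:+.2f}")    # relative to brand A
print(f"part-worth, high $  : {partworth_high_price:+.2f}")  # relative to low price
```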

Disadvantages

- designing conjoint studies can be complex
- with too many options, respondents resort to simplification strategies
- difficult to use for product positioning research because there is no procedure for converting perceptions about actual features to perceptions about a reduced set of underlying features
- respondents are unable to articulate attitudes toward new categories, or may feel forced to think about issues they would otherwise not give much thought to
- poorly designed studies may over-value emotional/preference variables and undervalue concrete variables
- does not take into account the number of items per purchase, so it can give a poor reading of market share

Methods that rely on quantitative data

1. Extrapolation

Example illustration of the extrapolation problem: assigning a meaningful value at the blue box, given the red data points.

In mathematics, extrapolation is the process of constructing new data points outside the range of known data points. It is similar to the process of interpolation, which constructs new points between known points, but the results of extrapolation are often less meaningful and are subject to greater uncertainty. It may also mean extension of a method, assuming similar methods will be applicable. Extrapolation may also apply to human experience, to project, extend, or expand known experience into an area not known or previously experienced, so as to arrive at a (usually conjectural) knowledge of the unknown (e.g. a driver extrapolates road conditions beyond his sight while driving).[1]

Extrapolation methods
A sound choice of which extrapolation method to apply relies on a prior knowledge of the process that created the existing data points. Some experts have proposed the use of causal forces in the evaluation of extrapolation methods.[2] Crucial questions are, for example, if the data can be assumed to be continuous, smooth, possibly periodic etc.

Linear extrapolation

Linear extrapolation means creating a tangent line at the end of the known data and extending it beyond that limit. Linear extrapolation will only provide good results when used to extend the graph of an approximately linear function or not too far beyond the known data. If the two data points nearest the point x* to be extrapolated are (x1, y1) and (x2, y2), linear extrapolation gives the function:

y(x*) = y1 + ((x* - x1) / (x2 - x1)) * (y2 - y1)

(which is identical to linear interpolation if x1 < x* < x2). It is possible to include more than two points, averaging the slope of the linear interpolant by regression-like techniques on the data points chosen to be included. This is similar to linear prediction.
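A direct implementation of the formula above might look like the following sketch; the data points used in the example are hypothetical.

```python
# A direct implementation of the linear extrapolation formula above,
# using the two known points nearest the point to be extrapolated.

def linear_extrapolate(x1, y1, x2, y2, x_star):
    """Extend the line through (x1, y1) and (x2, y2) to estimate y at x_star."""
    return y1 + (x_star - x1) / (x2 - x1) * (y2 - y1)

# Hypothetical example: demand was 100 units in period 8 and 110 in period 9;
# extrapolating the trend gives an estimate for period 10.
print(linear_extrapolate(8, 100, 9, 110, 10))  # 120.0
```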

Polynomial extrapolation

A polynomial curve can be created through the entire known data or just near the end. The resulting curve can then be extended beyond the end of the known data. Polynomial extrapolation is typically done by means of Lagrange interpolation or using Newton's method of finite differences to create a Newton series that fits the data. The resulting polynomial may be used to extrapolate the data. High-order polynomial extrapolation must be used with due care. For the example data set and problem in the figure above, anything above order 1 (linear extrapolation) will possibly yield unusable values; an error estimate of the extrapolated value will grow with the degree of the polynomial extrapolation. This is related to Runge's phenomenon.
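The sketch below, using made-up data and NumPy's polynomial-fitting helpers (an assumed dependency), shows the basic idea: fit a low-order polynomial to the known data and evaluate it beyond the last point.

```python
# Illustrative sketch (hypothetical data): fit a low-order polynomial to the
# known data and evaluate it beyond the last point. As noted above, high-order
# fits must be used with care when extrapolating.
import numpy as np

x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([2.1, 4.3, 6.2, 8.4, 10.1])  # roughly linear, made-up data

coeffs = np.polyfit(x, y, deg=1)          # degree 1: a linear trend
print(np.polyval(coeffs, 6.0))            # extrapolated value at x = 6
```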

Conic extrapolation

A conic section can be created using five points near the end of the known data. If the conic section created is an ellipse or circle, it will loop back and rejoin itself. A parabolic or hyperbolic curve will not rejoin itself, but may curve back relative to the X-axis. This type of extrapolation could be done with a conic sections template (on paper) or with a computer.

French curve extrapolation

French curve extrapolation is a method suitable for any distribution that has a tendency to be exponential, but with accelerating or decelerating factors. [3] This method has been used successfully in providing forecast projections of the growth of HIV/AIDS in the UK since 1987 and variant CJD in the UK for a number of years [1]. Another study has shown that extrapolation can produce the same quality of forecasting results as more complex forecasting strategies.[4]

2. Reference class forecasting


Reference class forecasting is the method of predicting the future, through looking at similar past situations and their outcomes. Reference class forecasting predicts the outcome of a planned action based on actual outcomes in a reference class of similar actions to that being forecast. The theories behind reference class forecasting were developed by Daniel Kahneman and Amos Tversky. The theoretical work helped Kahneman win the Nobel Prize in Economics. The methodology and data needed for employing reference class forecasting in practice in policy, planning, and management were developed by Oxford professor Bent Flyvbjerg and the COWI consulting group in a joint effort. Today, reference class forecasting has found widespread use in practice in both public and private sector policy and management. Using distributional information from previous ventures similar to the one being forecast is called taking an "outside view". Reference class forecasting is a method for taking an outside view on planned actions. Reference class forecasting for a specific project involves the following three steps:

1. Identify a reference class of past, similar projects.
2. Establish a probability distribution for the selected reference class for the parameter that is being forecast.
3. Compare the specific project with the reference class distribution, in order to establish the most likely outcome for the specific project.
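As an illustration only, the sketch below walks through the three steps with invented reference-class data; the overrun ratios, the quantiles chosen, and the base estimate are all assumptions for the example, not figures from the text.

```python
# Toy sketch (made-up data) of the three steps above: take the actual outcomes
# of a reference class of similar past projects, build an empirical
# distribution, and read a forecast for the new project off that distribution.
import statistics

# Step 1: reference class - cost overruns (as ratios) of similar past projects.
reference_class_overruns = [1.05, 1.10, 1.20, 1.25, 1.30, 1.45, 1.60, 1.80]

# Step 2: here the probability distribution is simply the empirical distribution
# of the reference class, summarised by its median and an upper quantile.
p50 = statistics.median(reference_class_overruns)
p80 = statistics.quantiles(reference_class_overruns, n=10)[7]  # ~80th percentile

# Step 3: position the specific project within that distribution by applying
# the distribution to its own ("inside view") base estimate.
base_estimate = 200.0  # hypothetical budget for the new project, in millions
print(f"P50 forecast: {base_estimate * p50:.0f}")
print(f"P80 forecast: {base_estimate * p80:.0f}")
```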

3. Neural network
Simplified view of a feedforward artificial neural network

The term neural network was traditionally used to refer to a network or circuit of biological neurons.[1] The modern usage of the term often refers to artificial neural networks, which are composed of artificial neurons or nodes. Thus the term has two distinct usages:

1. Biological neural networks are made up of real biological neurons that are connected or functionally related in a nervous system. In the field of neuroscience, they are often identified as groups of neurons that perform a specific physiological function in laboratory analysis.

2. Artificial neural networks are composed of interconnecting artificial neurons (programming constructs that mimic the properties of biological neurons). Artificial neural networks may either be used to gain an understanding of biological neural networks, or for solving artificial intelligence problems without necessarily creating a model of a real biological system. The real, biological nervous system is highly complex: artificial neural network algorithms attempt to abstract this complexity and focus on what may hypothetically matter most from an information processing point of view. Good performance (e.g. as measured by good predictive ability, low generalization error), or performance mimicking animal or human error patterns, can then be used as one source of evidence towards supporting the hypothesis that the abstraction really captured something important from the point of view of information processing in the brain. Another incentive for these abstractions is to reduce the amount of computation required to simulate artificial neural networks, so as to allow one to experiment with larger networks and train them on larger data sets.

This article focuses on the relationship between the two concepts; for detailed coverage of the two different concepts refer to the separate articles: biological neural network and artificial neural network.
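As a purely illustrative sketch of the feedforward idea pictured above (not a description of any particular library or of a method given in the text), the following computes one forward pass through a single hidden layer with fixed, made-up weights; a real network would learn its weights from data.

```python
# Minimal sketch of a feedforward artificial neural network: inputs pass
# through a hidden layer of artificial neurons to a single output neuron.
# All weights and inputs below are invented numbers for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """One forward pass: input layer -> hidden layer -> output neuron."""
    hidden = sigmoid(w_hidden @ x + b_hidden)  # hidden-layer activations
    return sigmoid(w_out @ hidden + b_out)     # network output in (0, 1)

x = np.array([0.5, 0.2, 0.8])                  # hypothetical input features
w_hidden = np.array([[0.1, -0.4, 0.6],
                     [0.7,  0.3, -0.2]])       # 2 hidden neurons, 3 inputs
b_hidden = np.array([0.0, 0.1])
w_out = np.array([0.5, -0.6])                  # output weights for the 2 hidden units
b_out = 0.2

print(forward(x, w_hidden, b_hidden, w_out, b_out))
```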

4. Data mining

Data mining (the analysis step of the knowledge discovery in databases process,[1] or KDD), a relatively young and interdisciplinary field of computer science,[2][3] is the process of discovering new patterns from large data sets involving methods at the intersection of artificial intelligence, machine learning, statistics and database systems.[2] The overall goal of the data mining process is to extract knowledge from a data set in a human-understandable structure,[2] and besides the raw analysis step it involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of found structures, visualization and online updating.[2]

The term is a buzzword, and is frequently misused to mean any form of large-scale data or information processing (collection, extraction, warehousing, analysis and statistics), but it is also generalized to any kind of computer decision support system, including artificial intelligence, machine learning and business intelligence. In the proper use of the word, the key term is discovery, commonly defined as "detecting something new". Even the popular book "Data mining: Practical machine learning tools and techniques with Java"[4] (which covers mostly machine learning material) was originally to be named just "Practical machine learning", and the term "data mining" was only added for marketing reasons.[5] Often the more general terms "(large scale) data analysis" or "analytics", or, when referring to actual methods, artificial intelligence and machine learning, are more appropriate.

The actual data mining task is the automatic or semi-automatic analysis of large quantities of data to extract previously unknown interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection) and dependencies (association rule mining). This usually involves using database techniques such as spatial indexes. These patterns can then be seen as a kind of summary of the input data, and used in further analysis or, for example, in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. Neither the data collection and data preparation nor the result interpretation and reporting are part of the data mining step, but they do belong to the overall KDD process as additional steps.

The related terms data dredging, data fishing and data snooping refer to the use of data mining methods to sample parts of a larger population data set that are (or may be) too small for reliable statistical inferences to be made about the validity of any patterns discovered. These methods can, however, be used in creating new hypotheses to test against the larger data populations.

Process
The knowledge discovery in databases (KDD) process is commonly defined with the stages (1) Selection, (2) Pre-processing, (3) Transformation, (4) Data Mining, and (5) Interpretation/Evaluation.[1] There are, however, many variations on this theme, such as the Cross Industry Standard Process for Data Mining (CRISP-DM), which defines six phases: (1) Business Understanding, (2) Data Understanding, (3) Data Preparation, (4) Modeling, (5) Evaluation, and (6) Deployment, or a simplified process such as (1) Pre-processing, (2) Data mining, and (3) Results validation.

Pre-processing
Before data mining algorithms can be used, a target data set must be assembled. As data mining can only uncover patterns actually present in the data, the target dataset must be large enough to contain these patterns while remaining concise enough to be mined in an acceptable timeframe. A common source for data is a data mart or data warehouse. Pre-processing is essential to analyze the multivariate datasets before data mining. The target set is then cleaned. Data cleaning removes the observations containing noise and those with missing data.

Data mining involves six common classes of tasks:[1]

- Anomaly detection (outlier/change/deviation detection): the identification of unusual data records, that might be interesting or data errors and require further investigation.
- Association rule learning (dependency modeling): searches for relationships between variables. For example, a supermarket might gather data on customer purchasing habits. Using association rule learning, the supermarket can determine which products are frequently bought together and use this information for marketing purposes. This is sometimes referred to as market basket analysis.
- Clustering: the task of discovering groups and structures in the data that are in some way or another "similar", without using known structures in the data.
- Classification: the task of generalizing known structure to apply to new data. For example, an email program might attempt to classify an email as legitimate or spam.
- Regression: attempts to find a function which models the data with the least error.
- Summarization: providing a more compact representation of the data set, including visualization and report generation.
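As a small illustration of two of these task classes, the sketch below uses scikit-learn (an assumed, commonly available library, not one prescribed by the text) to cluster made-up customer records and to classify made-up email records; all data are invented.

```python
# Illustrative sketch (made-up data) of two of the task classes listed above:
# clustering (unsupervised) and classification (supervised).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Hypothetical customer records: [monthly visits, average basket value].
customers = np.array([[2, 15], [3, 18], [2, 20], [20, 90], [22, 95], [19, 88]])

# Clustering: discover groups of "similar" customers without using known labels.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(customers)
print("cluster labels:", clusters)

# Classification: generalize known structure (labeled spam/legitimate emails,
# summarised here by two numeric features) to new data.
emails = np.array([[0, 1], [1, 8], [0, 2], [1, 9]])  # [contains_link, exclamation_marks]
labels = np.array([0, 1, 0, 1])                      # 0 = legitimate, 1 = spam
classifier = LogisticRegression().fit(emails, labels)
print("prediction for a new email:", classifier.predict([[1, 7]]))
```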

5. Causal model
A causal model is an abstract model that describes the causal mechanisms of a system. The model must express more than correlation, because correlation does not imply causation. Judea Pearl defines a causal model as an ordered triple (U, V, E), where U is a set of exogenous variables whose values are determined by factors outside the model; V is a set of endogenous variables whose values are determined by factors within the model; and E is a set of structural equations that express the value of each endogenous variable as a function of the values of the other variables in U and V.[1]
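A minimal sketch of the (U, V, E) idea, with invented variables and structural equations, might look as follows; it is illustrative only and not an example taken from Pearl or from the text.

```python
# Minimal sketch of a structural causal model in the (U, V, E) form described
# above. The particular variables, equations and numbers are assumptions made
# for the example, not part of the source text.

# U: exogenous variables, determined outside the model.
U = {"advertising": 50.0, "competitor_price": 12.0}

# E: structural equations giving each endogenous variable as a function of
# the exogenous variables and the other endogenous variables.
E = {
    "price":   lambda u, v: 0.8 * u["competitor_price"],
    "demand":  lambda u, v: 1000 + 4.0 * u["advertising"] - 30.0 * v["price"],
    "revenue": lambda u, v: v["price"] * v["demand"],
}

# V: endogenous variables, evaluated in an order consistent with the equations.
V = {}
for name in ["price", "demand", "revenue"]:
    V[name] = E[name](U, V)

print(V)  # {'price': 9.6, 'demand': 912.0, 'revenue': 8755.2}
```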

STATISTICAL METHODS
Statistical Methods: We shall now move from simple to more complex methods of demand forecasting. Such methods are usually taken from statistics, so you may already be familiar with some of these tools and techniques from quantitative methods for business decisions.

(1) Time series analysis or trend method: Under this method, the time series data on the variable under forecast are used to fit a trend line or curve, either graphically or through the statistical method of least squares. The trend line is worked out by fitting a trend equation to the time series data with the aid of an estimation method. The trend equation could take either a linear or any kind of non-linear form. The trend method outlined above often yields a dependable forecast. The advantage of this method is that it does not require formal knowledge of economic theory and the market; it only needs the time series data. Its main limitation is that it assumes that the past is repeated in the future. Also, it is an appropriate method for long-run forecasts, but inappropriate for short-run forecasts. Sometimes the time series analysis may not reveal a significant trend of any kind. In that case, the moving average method or the exponentially weighted moving average method is used to smoothen the series.

(2) Barometric techniques or lead-lag indicators method: This consists in discovering a set of series of some variables which exhibit a close association in their movement over a period of time. For example, consider the movement of agricultural income (the AY series) and the sale of tractors (the ST series). The movement of AY is similar to that of ST, but the movement in ST takes place after a year's time lag compared with the movement in AY. Thus if one knows the direction of the movement in agricultural income (AY), one can predict the direction of movement of tractor sales (ST) for the next year. Agricultural income (AY) may therefore be used as a barometer (a leading indicator) to help the short-term forecast for the sale of tractors. Generally, this barometric method has been used in some of the developed countries for predicting the business cycle situation. For this purpose, some countries construct what are known as diffusion indices by combining the movement of a number of leading series in the economy, so that turning points in business activity can be discovered well in advance. Some limitations of this method should be noted, however. The leading indicator method does not tell you anything about the magnitude of the change that can be expected in the lagging series, only the direction of change. Also, the lead period itself may change over time. Through our estimation we may find the best-fitted lag period on the past data, but the same may not hold true for the future. Finally, it may not always be possible to find the leading, lagging or coincident indicators of the variable for which a demand forecast is being attempted.

(3) Correlation and regression: These involve the use of econometric methods to determine the nature and degree of association between/among a set of variables. Econometrics, you may recall, is the use of economic theory, statistical analysis and mathematical functions to determine the relationship between a dependent variable (say, sales) and one or more independent variables (like price, income, advertisement etc.). The relationship may be expressed in the form of a demand function, as we have seen earlier. Such relationships, based on past data, can be used for forecasting.
The analysis can be carried out with varying degrees of complexity. Here we shall not get into the methods of finding the correlation coefficient or the regression equation; you must have covered those statistical techniques as part of quantitative methods. Similarly, we shall not go into questions of economic theory. We shall concentrate simply on the use of these econometric techniques in forecasting, in the realm of multiple regression and multiple correlation. The form of the equation may be:

DX = a + b1 A + b2 PX + b3 Y + b4 PY

You know that the regression coefficients b1, b2, b3 and b4 are the components of the relevant elasticities of demand. For example, b2 is a component of the price elasticity of demand. The coefficients reflect the direction as well as the proportion of change in demand for X as a result of a change in any of its explanatory variables. For example, b2 < 0 suggests that DX and PX are inversely related; b4 > 0 suggests that X and Y are substitutes; b3 > 0 suggests that X is a normal commodity with a positive income effect. Given the estimated values of a and the bi, you may forecast the expected sales (DX) if you know the future values of the explanatory variables: own price (PX), related price (PY), income (Y) and advertisement (A). Lastly, you may also recall that the statistic R2 (the coefficient of determination) gives a measure of goodness of fit: the closer it is to unity, the better the fit, and the more reliable the forecast. The principal advantage of this method is that it is predictive as well as descriptive. That is, besides generating a demand forecast, it explains why the demand is what it is. In other words, this technique has both explanatory and predictive value. The regression method is neither mechanistic like the trend method nor subjective like the opinion poll method. In this method of forecasting, you may use not only time-series data but also cross-section data. The only precaution you need to take is that the data analysis should be based on the logic of economic theory.

(4) Simultaneous equations method: This is a very sophisticated method of forecasting, also known as the complete system approach or econometric model building. In earlier units, we have made reference to such econometric models. We do not intend to get into the details of this method here because it is a subject by itself. Moreover, this method is normally used in macro-level forecasting for the economy as a whole; in this course, our focus is limited to micro elements only. Of course, you, as corporate managers, should know the basic elements of such an approach. The method is indeed very complicated. However, in the days of the computer, when package programmes are available, this method can be used easily to derive meaningful forecasts. The principal advantage of this method is that the forecaster needs to estimate the future values of only the exogenous variables, unlike the regression method, where he has to predict the future values of all the variables, endogenous and exogenous, affecting the variable under forecast. The values of exogenous variables are easier to predict than those of the endogenous variables. However, such econometric models have limitations similar to those of the regression method.
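The sketch below, using NumPy and entirely hypothetical data, illustrates the trend projection of method (1) and a regression demand function of the form discussed under method (3); the coefficient estimates are simply whatever least squares returns for the made-up data, not values from the text.

```python
# Illustrative sketch (hypothetical numbers): (1) a least-squares trend line
# through past sales, and (3) a regression demand function of the form
# DX = a + b1*A + b2*PX + b3*Y + b4*PY, used to forecast demand.
import numpy as np

# (1) Trend projection: least-squares trend line through past sales.
periods = np.arange(1, 9)                                   # t = 1..8
sales = np.array([120, 126, 131, 140, 147, 150, 158, 165], dtype=float)
slope, intercept = np.polyfit(periods, sales, deg=1)
print("trend forecast for t = 9:", intercept + slope * 9)

# (3) Regression: demand as a function of advertisement (A), own price (PX),
# income (Y) and the price of a related good (PY).
A  = np.array([10, 12, 11, 15, 14, 16, 18, 17], dtype=float)
PX = np.array([9.0, 9.5, 9.2, 8.8, 9.0, 8.5, 8.2, 8.4])
Y  = np.array([52, 53, 55, 56, 58, 59, 61, 62], dtype=float)
PY = np.array([7.0, 7.2, 7.1, 7.5, 7.4, 7.8, 8.0, 7.9])
X = np.column_stack([np.ones(len(A)), A, PX, Y, PY])
coefs, *_ = np.linalg.lstsq(X, sales, rcond=None)           # [a, b1, b2, b3, b4]
print("estimated coefficients:", coefs)

# Forecast DX for assumed future values of the explanatory variables.
future = np.array([1.0, 20.0, 8.0, 64.0, 8.2])              # [1, A, PX, Y, PY]
print("demand forecast:", future @ coefs)
```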
