Maths Project FINAL

INVERSE
TRIGONOMETRY

INTRODUCTION
Inverse trigonometry, an essential branch of mathematics, evolved through the
contributions of several ancient civilizations and brilliant mathematicians over
centuries.

The origins of trigonometry can be traced back to ancient civilizations, where it was
developed to understand astronomical phenomena and to solve problems in
geometry.

Hipparchus (circa 190-120 BC), the "father of trigonometry," created the first
trigonometric tables, crucial for early astronomical calculations. Ptolemy (circa 100-
170 AD) expanded these methods in his work, the Almagest. Among the Indian
mathematicians, Aryabhata (476-550 AD) introduced the sine function, and
Brahmagupta (598-668 AD) advanced trigonometric methods, setting the stage for
understanding inverse functions. Islamic scholars such as Al-Battani (858-929 AD) and
others built on Greek and Indian work, introducing new trigonometric identities and
methods that involved inverse trigonometric concepts.

The explicit formulation of inverse functions emerged in the 17th century, with
contributions from John Wallis (1616-1703) and Isaac Newton (1642-1727).
Ultimately, Leonhard Euler (1707-1783) formalized inverse trigonometric functions
and introduced the notations used today.

An inverse function essentially reverses the operation of a given function. If a
function f takes an input x and gives an output y, the inverse function takes y as
an input and returns x. In mathematical notation, if f(x) = y, then f⁻¹(y) = x. This
concept is crucial for solving equations where the output of a function needs to be
traced back to its input.

Inverse functions are not limited to trigonometry. Some common examples include:

 Inverse Linear Function: If f(x) = mx + b, then f⁻¹(y) = (y − b)/m
 Inverse Exponential Function: For f(x) = eˣ, the inverse is the natural logarithm,
   f⁻¹(y) = ln(y)
 Inverse Quadratic Function: If f(x) = x² (restricted to non-negative x), then
   f⁻¹(y) = √y
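As a quick illustration, the pairs above can be checked numerically. This is a small sketch in plain Python (the slope m, intercept b and the sample values are arbitrary choices for the example):

    import math

    # Inverse linear function: f(x) = m*x + b, f_inv(y) = (y - b)/m
    m, b = 2.0, 3.0
    f = lambda x: m * x + b
    f_inv = lambda y: (y - b) / m
    print(f_inv(f(10.0)))           # 10.0 -- the inverse undoes f

    # The natural logarithm undoes the exponential: ln(e^x) = x
    print(math.log(math.exp(1.5)))  # 1.5

    # The square root undoes squaring, for non-negative x
    print(math.sqrt(4.0 ** 2))      # 4.0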

Definition of Inverse Trigonometry


Inverse trigonometric functions are defined as the inverse
functions of the basic trigonometric functions: the sine, cosine,
tangent, cotangent, secant and cosecant functions. These inverse
functions are used to obtain an angle from any of the
trigonometric ratios.

They are also termed arcus functions, antitrigonometric
functions or cyclometric functions since, for a given value of a
trigonometric function, they produce the length of the arc needed
to obtain that particular value.

The inverse trigonometric functions perform the opposite
operation of the trigonometric functions sine, cosine, tangent,
cosecant, secant and cotangent. Trigonometric functions apply
especially to the right-angled triangle, and these six inverse
functions are used to find the measure of an angle in a
right-angled triangle when the measures of two of its sides are
known.

The inverse trigonometric functions have major applications in
the fields of engineering, physics, geometry and navigation.
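As a small illustration, an angle of a right-angled triangle can be recovered from two known sides with Python's standard math module (the 3-4-5 triangle here is an arbitrary example):

    import math

    opposite, adjacent, hypotenuse = 3.0, 4.0, 5.0   # a 3-4-5 right triangle

    angle = math.asin(opposite / hypotenuse)          # sin⁻¹(3/5)
    print(math.degrees(angle))                        # about 36.87 degrees

    # The other inverse functions recover the same angle from other ratios:
    print(math.degrees(math.acos(adjacent / hypotenuse)))  # cos⁻¹(4/5)
    print(math.degrees(math.atan(opposite / adjacent)))    # tan⁻¹(3/4)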

Restricted Ranges
The inverse of a function f exists only if the function is one-one and
onto, i.e. bijective. Since trigonometric functions are many-one
over their domains, we restrict their domains and co-domains in
order to make them one-one and onto, and then find their
inverses. The domains and ranges (principal value branches) of the
inverse trigonometric functions are given below:

S.No.  Function        Domain              Range
I.     y = sin⁻¹ x     −1 ≤ x ≤ 1          −π/2 ≤ y ≤ π/2
II.    y = cos⁻¹ x     −1 ≤ x ≤ 1          0 ≤ y ≤ π
III.   y = tan⁻¹ x     All real numbers    −π/2 < y < π/2
IV.    y = cot⁻¹ x     All real numbers    0 < y < π
V.     y = sec⁻¹ x     x ≥ 1 or x ≤ −1     0 ≤ y ≤ π, y ≠ π/2
VI.    y = cosec⁻¹ x   x ≥ 1 or x ≤ −1     −π/2 ≤ y ≤ π/2, y ≠ 0

Principal values of inverse trigonometric functions


The solution in which the absolute value of the angle is the least
is called the principal solution. For example, the value of cos 0° is
1, and the value of cos 2π, cos 4π, … is also 1. The smallest numerical
value, either positive or negative, of an inverse trigonometric
function is called the principal value of the function. Thus, the
principal values of sin⁻¹ x, tan⁻¹ x and cosec⁻¹ x are the angles that
lie between −π/2 and π/2, and the principal values of cos⁻¹ x, sec⁻¹ x
and cot⁻¹ x are the angles that lie between 0 and π.

Functions      Domain              Principal value       For x ≥ 0        For x < 0

y = sin⁻¹ x    −1 ≤ x ≤ 1          −π/2 ≤ y ≤ π/2        0 ≤ y ≤ π/2      −π/2 ≤ y < 0
y = cos⁻¹ x    −1 ≤ x ≤ 1          0 ≤ y ≤ π             0 ≤ y ≤ π/2      π/2 < y ≤ π
y = tan⁻¹ x    All real numbers    −π/2 < y < π/2        0 ≤ y < π/2      −π/2 < y < 0
y = cot⁻¹ x    All real numbers    0 < y < π             0 < y ≤ π/2      π/2 < y < π
y = sec⁻¹ x    x ≥ 1 or x ≤ −1     0 ≤ y ≤ π, y ≠ π/2    0 ≤ y < π/2      π/2 < y ≤ π
y = cosec⁻¹ x  x ≥ 1 or x ≤ −1     −π/2 ≤ y ≤ π/2, y ≠ 0 0 < y ≤ π/2      −π/2 ≤ y < 0
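These principal branches are exactly what standard math libraries return. A small Python check (values in radians; the sample inputs are arbitrary):

    import math

    print(math.asin(0.5), math.asin(-0.5))  #  0.5236, -0.5236 -> within [-pi/2, pi/2]
    print(math.acos(0.5), math.acos(-0.5))  #  1.0472,  2.0944 -> within [0, pi]
    print(math.atan(1.0), math.atan(-1.0))  #  0.7854, -0.7854 -> within (-pi/2, pi/2)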

Representation of the principal value of sin⁻¹ x using a unit circle

[Figure: unit-circle diagram and graph, not reproduced here.]

Representation of the principal value of cos⁻¹ x using a unit circle

[Figure: unit-circle diagram, not reproduced here.]

Formulas related to Inverse Trigonometry

[The formula sheet in the original is an image and is not reproduced here.]
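Since the original sheet is unavailable, here are a few standard identities that such a list typically includes (stated for values in the principal branches):

 sin⁻¹(−x) = −sin⁻¹ x and cos⁻¹(−x) = π − cos⁻¹ x, for −1 ≤ x ≤ 1
 sin⁻¹ x + cos⁻¹ x = π/2, for −1 ≤ x ≤ 1
 tan⁻¹ x + cot⁻¹ x = π/2, for all real x
 tan⁻¹ x + tan⁻¹ y = tan⁻¹((x + y)/(1 − xy)), for xy < 1
 sin⁻¹(1/x) = cosec⁻¹ x and cos⁻¹(1/x) = sec⁻¹ x, for |x| ≥ 1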

CONCLUSION
An inverse function exists only if the function is one-to-one and
onto (bijective). sin⁻¹ x, cos⁻¹ x and tan⁻¹ x can therefore be defined
only on restricted domains: the values of x for which the sine,
cosine and tangent mappings are one-to-one. Every bijective
function has an inverse function that reverses its effect: applying
a function and then its inverse to the variable x results in the two
'cancelling' each other, leaving just the variable x.

INTRODUCTION
Linear programming (LP) is an advanced optimization technique within the field of
operations research, formulated to determine the best possible outcome—such as
maximizing profit or minimizing costs—under a given set of linear constraints.
Originating in the 1930s and 1940s, linear programming was pioneered by
mathematician George Dantzig, who introduced the Simplex method in 1947. This
groundbreaking algorithm revolutionized the ability to solve large-scale linear
optimization problems by efficiently navigating the vertices of the feasible region,
defined by the linear constraints, to find the optimal solution.

The foundation of linear programming lies in its formulation, which includes an


objective function, decision variables, and constraints. The objective function is a
linear equation that represents the goal to be achieved. Decision variables are the
unknowns to be determined, representing choices available to the decision-maker.
Constraints are linear equations or inequalities that encapsulate the limitations or
requirements, such as resource availability or production capacities.

The feasible region, formed by these constraints, is a convex polytope within which
the optimal solution lies. The Simplex method, an iterative algorithm, systematically
moves from one vertex of this polytope to another, improving the value of the
objective function at each step, until the best possible solution is identified.

The term “linear programming” consists of two words: “linear” and “programming”.
The word “linear” refers to relationships between variables of degree one, and the
word “programming” refers to the process of selecting the best solution from
various alternatives.

Today, linear programming continues to be a vital tool in decision-making processes,


helping to solve complex problems related to resource allocation, production
planning, transportation, diet optimization, and financial portfolio design. Its
development has also paved the way for further advancements in optimization
techniques, including integer programming, nonlinear programming, and dynamic
programming, solidifying its place as a cornerstone in the study and application of
operations research and management science.

Definition of Linear Programming Problems(LPP)


Linear programming problems are a class of optimization issues where
the goal is to achieve the best possible outcome, such as maximizing
profit or minimizing cost, subject to a set of linear constraints.

The main aim of the linear programming problem is to find the optimal
solution.

Linear programming is the method of considering the different inequalities
relevant to a situation and calculating the best value obtainable under those
conditions. Some of the assumptions made while working with linear
programming are:

 The constraints should be expressed in quantitative terms
 The relationship between the constraints and the objective function
should be linear
 The linear function (i.e., objective function) is to be optimized

Components of Linear Programming

1. Decision Variables
Definition: Decision variables are the unknowns that we aim to solve for in
a linear programming problem. They represent the choices available to
the decision-maker.

Characteristics:

 Continuous or Discrete: In standard LP, decision variables are typically


continuous, meaning they can take any non-negative real values. In
integer programming, they must be whole numbers.

 Non-Negativity: Decision variables are often required to be non-


negative, reflecting real-world quantities like products, resources, or
time.

Example: In a manufacturing problem:

 x₁ could represent the number of units of product A to produce.
 x₂ could represent the number of units of product B to produce.

2. Objective Function
Definition: The objective function is a linear equation that needs to be
maximized or minimized. It represents the goal of the optimization, such
as maximizing profit or minimizing cost.

Form: Z = c₁x₁ + c₂x₂ + … + cₙxₙ, where

c₁, c₂, …, cₙ = coefficients (the contribution of each variable to the objective)

x₁, x₂, …, xₙ = decision variables

Characteristics:

 Linear Combination: It is a linear combination of decision variables,


each multiplied by a coefficient representing its contribution to the
objective.
 Single Objective: LP deals with a single objective function, although
techniques exist for handling multiple objectives.

Example: In a profit maximization problem:

Maximize Z = 40x₁ + 30x₂

3. Constraints
Definition: Constraints are linear inequalities or equations that represent
the limitations or requirements of the problem. They restrict the values
that the decision variables can take.

Characteristics:

 Linear Equations/Inequalities: Constraints must be linear.


 Feasibility Region: The set of all possible solutions that satisfy the
constraints forms the feasible region.

Example: In a resource allocation problem:

2x₁ + x₂ ≤ 100 (machine hours)

3x₁ + 2x₂ ≤ 90 (labour hours)

4. Non-Negativity Restriction
Definition: This condition ensures that the decision variables cannot take
negative values. This is often a realistic assumption since negative
quantities of resources or products are not practical.

Form: x₁, x₂, …, xₙ ≥ 0

Characteristics:

 Practicality: Ensures solutions are realistic and applicable in real-


world scenarios.
 Feasibility: Helps define the feasible region by limiting the decision
variables to non-negative values.

5. FEASIBLE REGION
The common region determined by all the constraints of an LPP is called
the feasible region. It is the set of all points of the optimization
problem that satisfy the problem's constraints, which may include
inequalities, equalities and integrality requirements.

6. FEASIBLE SOLUTION
A point in the feasible region is a feasible solution: a set of values for
the decision variables that satisfies all of the constraints of the
optimization problem. The feasible solution with the best objective
function value (the largest, for a maximization problem) is called the
optimal solution.

Solving Linear Programming Problems


Linear programming problems can be solved using various methods such
as:

1. Graphical Method: This is useful for problems with two decision


variables. The feasible region is plotted, and the optimal solution is
found at one of the vertices of this region.
2. Simplex Method: This is an iterative algorithm used for larger
problems. It starts at a feasible solution and moves towards the
optimal solution by improving the objective function value at each step.
3. Dual Simplex Method: A variant of the simplex method used when the
   initial basic solution is infeasible but already satisfies the optimality
   condition; it works toward feasibility while preserving optimality.
4. Interior-Point Methods: These methods move through the interior of the
feasible region rather than along the edges, and are often more
efficient for very large problems.

Linear Programming Simplex Method


The simplex method is one of the most popular methods for solving linear
programming problems. It is an iterative process for reaching the optimal
feasible solution. In this method, the values of the basic variables are
transformed step by step until the maximum value of the objective function
is obtained. The algorithm for the linear programming simplex method is
provided below:

Step 1: Set up the given problem, i.e., write the inequality constraints
and the objective function.

Step 2: Convert the given inequalities to equations by adding a slack
variable to each inequality expression.

Step 3: Create the initial simplex tableau. Write the objective function at
the bottom row. Here, each inequality constraint appears in its own row.
Now, we can represent the problem in the form of an augmented matrix,
which is called the initial simplex tableau.

Step 4: Identify the greatest negative entry in the bottom row; this
identifies the pivot column. The greatest negative entry in the bottom row
corresponds to the largest coefficient in the objective function, which lets
us increase the value of the objective function as fast as possible.

Step 5: Compute the quotients. To calculate a quotient, divide the entry
in the far right column by the corresponding entry in the pivot column,
excluding the bottom row. The smallest non-negative quotient identifies the
pivot row. The row identified in this step and the column identified in
step 4 determine the pivot element.

Step 6: Carry out pivoting to make all other entries in the pivot column zero.

Step 7: If there are no negative entries in the bottom row, end the process.
Otherwise, start from step 4.

Step 8: Finally, determine the solution associated with the final simplex
tableau.
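The eight steps above can be condensed into a short program. The following is a minimal sketch in Python with NumPy (the function name and structure are my own, for maximization problems with ≤ constraints and non-negative right-hand sides), not a production implementation:

    import numpy as np

    def simplex(c, A, b):
        """Maximize c @ x subject to A @ x <= b and x >= 0."""
        m, n = A.shape
        # Steps 1-3: add one slack variable per inequality and build the
        # initial tableau; the bottom row holds the negated objective.
        T = np.zeros((m + 1, n + m + 1))
        T[:m, :n] = A
        T[:m, n:n + m] = np.eye(m)            # slack variable columns
        T[:m, -1] = b
        T[-1, :n] = -c                        # objective row
        while True:
            col = int(np.argmin(T[-1, :-1]))  # Step 4: pivot column
            if T[-1, col] >= 0:               # Step 7: no negatives -> optimal
                break
            # Step 5: ratio test to choose the pivot row.
            ratios = np.full(m, np.inf)
            positive = T[:m, col] > 1e-12
            ratios[positive] = T[:m, -1][positive] / T[:m, col][positive]
            row = int(np.argmin(ratios))
            if ratios[row] == np.inf:
                raise ValueError("objective is unbounded")
            # Step 6: pivot so every other entry in the column becomes zero.
            T[row] /= T[row, col]
            for r in range(m + 1):
                if r != row:
                    T[r] -= T[r, col] * T[row]
        # Step 8: read the solution off the final tableau (basic columns).
        x = np.zeros(n)
        for j in range(n):
            column = T[:m, j]
            if np.count_nonzero(column) == 1 and T[-1, j] == 0:
                x[j] = T[int(np.argmax(column)), -1]
        return x, T[-1, -1]

    # The product-mix example worked out on the following pages:
    x, z = simplex(np.array([40.0, 30.0]),
                   np.array([[2.0, 1.0], [3.0, 2.0]]),
                   np.array([100.0, 90.0]))
    print(x, z)   # -> [ 0. 45.] 1350.0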

Graphical Method
The graphical method is used to optimize a two-variable linear
program. If the problem has two decision variables, the graphical
method is the most convenient way to find the optimal solution. In this
method, the constraints are written as a set of inequalities and plotted
in the XY plane. Once all the inequalities are plotted, their common
intersection determines the feasible region, which provides the optimal
solution and also shows the full set of values the model can take.

Detailed Example of a Linear Programming Problem

Problem Statement

A company produces two products, A and B. The profit per unit of product
A is $40 and for product B is $30. The production process involves two
resources: machine hours and labor hours.

 Product A requires 2 machine hours and 3 labor hours per unit.


 Product B requires 1 machine hour and 2 labor hours per unit.
 The company has 100 machine hours and 90 labor hours available per
week.

Solution

Decision Variables

 x₁ : Number of units of product A to produce.

 x₂ : Number of units of product B to produce.

Objective Function

Maximize the total profit: Z = 40x₁ + 30x₂



Constraints

Machine hours constraint: 2x₁ + x₂ ≤ 100

Labour hours constraint: 3x₁ + 2x₂ ≤ 90

Non-negativity constraints: x₁ ≥ 0 and x₂ ≥ 0

Graph of the solution

[The original plot is not reproduced here; a sketch that re-creates it follows.]
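A short matplotlib sketch (assuming numpy and matplotlib are available) that re-creates the plot of the two constraint lines and the shaded feasible region:

    import numpy as np
    import matplotlib.pyplot as plt

    x1 = np.linspace(0, 55, 400)
    plt.plot(x1, 100 - 2 * x1, label="2x1 + x2 = 100 (machine hours)")
    plt.plot(x1, (90 - 3 * x1) / 2, label="3x1 + 2x2 = 90 (labour hours)")

    # Shade the feasible region: under both lines, in the first quadrant.
    upper = np.minimum(100 - 2 * x1, (90 - 3 * x1) / 2)
    plt.fill_between(x1, 0, np.maximum(upper, 0), where=upper > 0, alpha=0.3)

    plt.xlim(0, 55)
    plt.ylim(0, 110)
    plt.xlabel("x1 (units of product A)")
    plt.ylabel("x2 (units of product B)")
    plt.legend()
    plt.show()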

Plot the Constraints

Constraint 1: 2x₁ + x₂ = 100

This line is plotted by computing x₂ = 100 − 2x₁ over a range of x₁ values.

Constraint 2: 3x₁ + 2x₂ = 90

This line is plotted by computing x₂ = (90 − 3x₁)/2 over a range of x₁ values.

Identify the Feasible Region

The feasible region is the area where both constraints are satisfied
simultaneously. It is the shaded area in the plot.

Identify the Vertices of the Feasible Region

For these data the labour constraint is the binding one: any point with
3x₁ + 2x₂ ≤ 90 automatically satisfies 2x₁ + x₂ ≤ 100. The vertices of the
feasible region, where the binding constraint line meets the axes, are
therefore:

1. (0, 0)
2. (30, 0)
3. (0, 45)

Evaluate the Objective Function at Each Vertex

To find the optimal solution, evaluate the objective function Z = 40x₁ + 30x₂
at each vertex:

1. At (0, 0): Z = 40(0) + 30(0) = 0
2. At (30, 0): Z = 40(30) + 30(0) = 1200
3. At (0, 45): Z = 40(0) + 30(45) = 1350

The maximum value of Z is 1350, at the vertex (0, 45). Therefore, the
optimal solution is to produce 0 units of product A and 45 units of
product B, achieving the maximum profit of $1350. (Product B earns more
profit per machine hour and per labour hour than product A, so the optimum
spends the scarce labour hours entirely on B.)
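The same answer can be cross-checked with an off-the-shelf solver. Here is a minimal sketch using SciPy's linprog (which minimizes, so the profits are negated):

    from scipy.optimize import linprog

    c = [-40, -30]            # negated profits of products A and B
    A_ub = [[2, 1],           # machine hours: 2*x1 + 1*x2 <= 100
            [3, 2]]           # labour hours:  3*x1 + 2*x2 <= 90
    b_ub = [100, 90]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub)   # bounds default to x >= 0
    print(res.x)     # optimal production plan -> approximately [ 0. 45.]
    print(-res.fun)  # maximum profit          -> 1350.0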

Applications of Linear Programming:

1. Manufacturing

Production Planning: Determining the optimal mix of products to


manufacture in order to maximize profit or minimize costs while
considering constraints such as labor, materials, and production capacity.

Inventory Management: Optimizing the levels of inventory to minimize


holding and shortage costs while meeting demand.

2. Transportation and Logistics


Routing: Finding the most efficient way to ship goods from sources to
destinations at minimum cost; the classical LP formulation is known as the
Transportation Problem.

Scheduling: Optimizing schedules for shipping and delivery to ensure


timely delivery while minimizing costs.

Fleet Management: Allocating vehicles to different routes or tasks in an


optimal manner.

3. Finance
Portfolio Optimization: Allocating investments across different assets to
maximize return or minimize risk, considering constraints like budget, risk
tolerance, and regulatory requirements.

Loan and Credit Management: Optimizing the allocation of loans and


credits to maximize returns or minimize risks.

4. Operations Research
Resource Allocation: Distributing limited resources (such as workforce,
machinery, or budget) across various projects or departments to achieve
the best possible outcome.

Supply Chain Optimization: Managing the flow of goods, information, and


finances in a supply chain to minimize costs and improve efficiency.

5. Energy Sector
Power Generation: Optimizing the mix of energy sources (e.g., coal, gas,
renewable) to meet demand at minimum cost while considering
environmental regulations.

Load Balancing: Distributing electricity in a grid to ensure stable supply


and minimize transmission losses.

6. Telecommunications
Network Design: Designing communication networks to minimize costs
and maximize coverage or data flow.

Bandwidth Allocation: Optimizing the distribution of bandwidth among


users or applications to ensure efficient use and quality of service.

7. Agriculture
Crop Planning: Determining the optimal mix of crops to plant to maximize
yield or profit while considering constraints like land, water, labor, and
market demand.

Livestock Management: Optimizing feeding and breeding schedules to


maximize productivity and minimize costs.

8. Healthcare
Staff Scheduling: Allocating medical staff to shifts and tasks to ensure
adequate coverage and minimize costs.

Resource Allocation: Distributing medical resources (e.g., beds,


equipment, medication) to maximize patient care and minimize costs.

Example Application: Diet Problem

One classic example of LP in action is the "Diet Problem", which involves
determining the optimal combination of foods to meet nutritional
requirements at the minimum cost. A brief sketch of the formulation follows.
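A minimal, hypothetical formulation (the two foods, costs and nutrient figures below are made up purely for illustration), again using SciPy's linprog:

    from scipy.optimize import linprog

    # Minimize cost = 0.60*x1 + 0.35*x2, where x1, x2 are units of two foods.
    c = [0.60, 0.35]

    # Nutrient requirements are "at least" constraints, so N @ x >= r is
    # rewritten as -N @ x <= -r for linprog:
    A_ub = [[-8, -4],     # protein: 8*x1 + 4*x2 >= 16 units
            [-2, -6]]     # fibre:   2*x1 + 6*x2 >= 12 units
    b_ub = [-16, -12]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub)   # x >= 0 by default
    print(res.x, res.fun)   # cheapest food mix and its total cost (~1.28)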

Conclusion

Linear programming stands as a cornerstone of optimization techniques,


offering a robust framework for identifying the best possible outcomes
within mathematical models characterized by linear relationships. By
optimizing a linear objective function while adhering to a set of linear
constraints, it provides a systematic approach to decision-making and
resource allocation. This method has broad applicability across numerous
domains including economics, business, engineering, logistics, and
military strategy, effectively solving problems related to production
scheduling, transportation, network flows, and financial planning.

The strength of linear programming lies in its ability to simplify complex


problems, enabling efficient and practical solutions. Its widespread use is
further bolstered by the availability of sophisticated algorithms and
software tools, which have significantly enhanced its accessibility and
implementation. As industries and technologies continue to evolve, the
relevance and utility of linear programming remain profound, ensuring it
remains a vital tool in the arsenal of modern problem-solving and
optimization strategies.
