ML-AI - Technical Coding Evaluation
Optimization
Overview
In an airline call center, agents handle a wide range of customer inquiries, from flight cancellations and reschedules to refunds and complaints. These conversations produce valuable data that can be leveraged through Generative AI. By integrating services such as AWS Bedrock, OpenAI, or Together AI models, the system can turn that data into fast, accurate answers. This approach should enhance customer satisfaction, reduce call handling times, and streamline agent performance.
Problem Statement
You are building a system where two AI agents collaborate to answer user queries about
airline flights:
1. Info Agent: holds the flight data and returns structured information.
2. QA Agent: receives user queries (e.g., “What time does Flight 123 depart?”), requests data from the Info Agent, and formats the final answer.
Functions to Implement
- A function that returns the stored flight information as JSON, for example:
```json
{
  "flight_number": "AI123",
  "departure_time": "08:00 AM",
  "destination": "Delhi",
  "status": "Delayed"
}
```
- A function that extracts the flight number from the query (e.g., “Flight 123”) and returns the final answer as JSON, for example:
```json
{
  "answer": "Flight AI123 departs at 08:00 AM to Delhi. Current status: Delayed."
}
```
The output must strictly follow JSON format: no plain text or extra commentary.
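One way the two agents might fit together is sketched below. The function names, the in-memory `FLIGHTS` dict, and the "AI" prefix heuristic are all assumptions for illustration, not part of the specification:

```python
import json
import re

# Hypothetical in-memory store standing in for the flight data source;
# records and function names below are illustrative assumptions.
FLIGHTS = {
    "AI123": {
        "flight_number": "AI123",
        "departure_time": "08:00 AM",
        "destination": "Delhi",
        "status": "Delayed",
    }
}

def info_agent_request(flight_number):
    """Info Agent: return flight data as a JSON string."""
    data = FLIGHTS.get(flight_number)
    if data is None:
        return json.dumps({"error": f"Flight {flight_number} not found."})
    return json.dumps(data)

def qa_agent_respond(query):
    """QA Agent: extract the flight number, query the Info Agent, format JSON."""
    match = re.search(r"Flight\s+([A-Za-z]*\d+)", query, re.IGNORECASE)
    if not match:
        return json.dumps({"answer": "No flight number found in the query."})
    number = match.group(1).upper()
    # The sample query says "Flight 123" but the record key is "AI123";
    # assume an "AI" prefix when the query gives digits only.
    if not number.startswith("AI"):
        number = "AI" + number
    info = json.loads(info_agent_request(number))
    if "error" in info:
        return json.dumps({"answer": info["error"]})
    return json.dumps({
        "answer": (
            f"Flight {info['flight_number']} departs at {info['departure_time']} "
            f"to {info['destination']}. Current status: {info['status']}."
        )
    })

print(qa_agent_respond("What time does Flight 123 depart?"))
```

Because both functions return `json.dumps(...)` output, the strict-JSON requirement is satisfied by construction: no plain text ever leaves either agent.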
Test Cases:

| Function Call | Expected Output |
| --- | --- |
Problem Statement
You have a small dataset of customer feedback from an airline. Each row contains a text snippet and a binary label: positive or negative. For example:

| Text | Label |
| --- | --- |
| "The flight was on time, and the staff was friendly." | positive |

Your task:
Functions to Implement
Test Cases

| Function Call | Expected Output |
| --- | --- |
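A minimal baseline for this task could be a TF-IDF plus logistic-regression pipeline in scikit-learn. In the sketch below, only the first row comes from the problem statement; the other rows and the `classify_feedback` name are illustrative assumptions:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative dataset: only the first row is from the problem statement
texts = [
    "The flight was on time, and the staff was friendly.",
    "My luggage was lost and nobody helped me.",
    "Great service and a very smooth landing.",
    "The flight was delayed for hours with no updates.",
]
labels = ["positive", "negative", "positive", "negative"]

# TF-IDF features feeding a logistic-regression classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def classify_feedback(text):
    """Return 'positive' or 'negative' for one feedback snippet."""
    return model.predict([text])[0]

print(classify_feedback("Friendly staff and on-time departure."))
```

With a dataset this small, predictions on unseen text are unreliable; the pipeline is meant as a starting point to swap in the real dataset or an LLM-based classifier.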
API keys for OpenAI, AWS Bedrock, or Together AI (depending on your choice of
LLM provider)
1. Using OpenAI
```python
# Note: this uses the legacy (pre-1.0) openai client interface.
import os
import openai

os.environ["OPENAI_API_KEY"] = "your_openai_api_key"

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is AI?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```
2. Using AWS Bedrock
4. Generate and download the Access Key ID and Secret Access Key.
```python
import json
import boto3

bedrock = boto3.client(
    service_name="bedrock-runtime",
    region_name="us-east-1",  # change region if needed
    aws_access_key_id="your_aws_access_key",
    aws_secret_access_key="your_aws_secret_key",
)

# Claude v2 expects the Human/Assistant prompt format and max_tokens_to_sample
body = json.dumps({
    "prompt": "\n\nHuman: What is AI?\n\nAssistant:",
    "max_tokens_to_sample": 100,
})
response = bedrock.invoke_model(
    body=body,
    modelId="anthropic.claude-v2",
)
print(response["body"].read().decode("utf-8"))
```
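The Bedrock response body is itself a JSON document; for Claude-style models the generated text sits under a `"completion"` key. A sketch of the parsing step, using a hard-coded example string in place of a live response:

```python
import json

# Hard-coded stand-in for response["body"].read().decode("utf-8");
# Claude-style responses put the generated text under "completion".
raw_body = '{"completion": " AI is the simulation of human intelligence in machines.", "stop_reason": "stop_sequence"}'

payload = json.loads(raw_body)
answer = payload["completion"].strip()  # drop the leading space Claude emits
print(answer)
```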
3. Using Together AI
```python
import os
import together

os.environ["TOGETHER_API_KEY"] = "your_together_api_key"

response = together.ChatCompletion.create(
    model="together/gpt-neoxt-20b",
    messages=[{"role": "user", "content": "What is AI?"}],
)
print(response["choices"][0]["message"]["content"])
```
2. Language & Libraries
Use Python.
3. Project Structure
Suggested file structure:
4. Testing
5. Documentation
- Installation
6. Time Constraints
- Hyperparameter tuning
7. Submission Instructions
1. Project Structure
b. Each folder must contain a README.md with instructions to run the code.
2. API Keys
a. Zip the project folder while excluding virtual environments and cache files.
b. The final ZIP file should contain all folders with their respective README.md,
requirements.txt, and scripts.
c. Upload the ZIP file to your Google Drive and set sharing so that anyone with the link can access it.
d. Submit the link through the Google Form below.
e. Google Form for Submission:
https://docs.google.com/forms/d/e/1FAIpQLSc8E8Sh32CeKFDrN82DouvMh1DLimzWgiTW_VtmJAmlziophw/viewform?usp=header
Example Structure:
```
submission.zip/
│── problem1/
│   ├── main.py
│   ├── api_keys.env
│   ├── requirements.txt
│   ├── README.md
│── problem2/
│   ├── main.py
│   ├── api_keys.env
│   ├── requirements.txt
│   ├── README.md
```
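Each `api_keys.env` file can be loaded at runtime so keys never appear in the scripts. A stdlib-only sketch is shown below (the `load_env_file` helper is illustrative; the `python-dotenv` package is the usual production choice):

```python
import os

def load_env_file(path):
    """Read KEY=value lines from an env file into os.environ."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            # setdefault: never overwrite a key already set in the environment
            os.environ.setdefault(key.strip(), value.strip())

# Example usage with a throwaway file standing in for api_keys.env
with open("api_keys.env", "w") as fh:
    fh.write("OPENAI_API_KEY=your_openai_api_key\n")
load_env_file("api_keys.env")
```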
All the very best!