
AI Software Intern – Internship Task Document
Internship Role Overview
As an AI Intern for 6 months (full-time), you will engage in research-driven development
of Generative AI applications. The internship emphasizes both academic research and
hands-on implementation, contributing to real product development, exploring research
papers, and building internal tools.

● Company: Wasserstoff

● Role: AI Intern (Generative AI) – Full-Time, 6-Month Internship, Work From Office (Gurugram)

● Focus: Research-based implementation of Generative AI tools and applications (mix of research and real product development)

● Required Skills: Python; Transformers and LLM architecture; LangChain framework; OpenAI API usage; system design fundamentals; basic database knowledge (SQL/NoSQL)

Internship Task: Generative AI Backend & Interactive Guessing Game
Objective

Build a minimum‑viable clone of the “What Beats Rock” concept that demonstrates
your ability to wire Generative AI into a production‑ready backend, apply caching &
data‑structures, and ship a one‑click Docker deployment. The frontend only needs to be
functional and lightly animated; the technical depth must live in the backend and infra.
Task Details

1. Core Game Logic

○ A round starts with a seed word (e.g., Rock). Users type something they think "beats" it.

○ Forward the guess to a GenAI endpoint with a well‑crafted prompt asking whether guess X beats word Y.

○ If the AI says YES ➜ add the guess to a linked list (or equivalent) that stores every distinct answer in order, increment the user's score, and respond with:
Nice! “Paper” beats “Rock”. Paper has been guessed 3 times before.

○ If the guess already exists in the linked list, immediately return Game Over.

○ At any point the user can request a history of their own guesses (traverse the list). (A minimal sketch of this logic appears after this list.)

2. Global Guess Counter

○ Persist a per‑answer counter ("Paper → 42 total guesses so far") in your database. Expose it in the response as above.

3. Caching Layer

○ Cache (input‑pair → AI verdict) so identical validations never hit the LLM twice (see the cache sketch after this list).

4. Concurrency & Rate Limits

○ Handle ≥100 simultaneous players without crashing. Use async I/O, connection pooling, and sensible per‑IP rate limits (a small limiter sketch follows the list).

5. Moderation & Persona

○ Filter profanity / disallowed content before sending it to the LLM.

○ Expose at least two host personas (e.g., Serious vs Cheery) selectable via query param or header (see the persona sketch after this list).

6. Frontend (lightweight)

○ Single‑page HTML or tiny React app.

○ Textbox for guesses, animated emoji feedback (e.g., confetti on a correct guess, a sad emoji on Game Over).

○ Display current score, last five guesses, and the global guess count for the most recent answer.

7. Deployment

○ Provide a Dockerfile and docker‑compose.yml that spin up API, DB, and Redis with one click in Docker Desktop (an example compose file follows the list).

○ A bonus point if you additionally live‑deploy to a free tier (Render, Railway, Fly.io) and include the URL.
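A minimal sketch of the core game logic from steps 1–2, assuming an in-memory singly linked list per session and a hypothetical bump_global_count coroutine that increments the per-answer counter in your database; the names, return shapes, and messages are illustrative, not prescriptive.

# game_logic_sketch.py – illustrative only; adapt to your own models and storage.
from dataclasses import dataclass
from typing import Optional


@dataclass
class GuessNode:
    word: str
    next: Optional["GuessNode"] = None


class GuessList:
    """Singly linked list of distinct answers for one game session."""

    def __init__(self, seed_word: str):
        self.head = GuessNode(seed_word.lower())
        self.tail = self.head

    def contains(self, word: str) -> bool:
        node = self.head
        while node:
            if node.word == word.lower():
                return True
            node = node.next
        return False

    def append(self, word: str) -> None:
        node = GuessNode(word.lower())
        self.tail.next = node
        self.tail = node

    def history(self) -> list[str]:
        out, node = [], self.head
        while node:
            out.append(node.word)
            node = node.next
        return out


async def handle_guess(session: GuessList, guess: str, verdict_yes: bool, bump_global_count) -> dict:
    """Duplicate guess => game over; YES verdict => append, score, report global count."""
    target = session.tail.word
    if session.contains(guess):
        return {"status": "game_over", "reason": "duplicate guess"}
    if not verdict_yes:
        return {"status": "wrong", "message": f"'{guess}' does not beat '{target}'."}
    session.append(guess)
    total = await bump_global_count(guess)  # e.g. Redis INCR or SQL UPDATE ... RETURNING
    return {"status": "correct",
            "message": f"Nice! '{guess}' beats '{target}'. '{guess}' has been guessed {total} times before."}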
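One possible shape for the caching layer in step 3, assuming redis-py's asyncio client and a hypothetical ask_llm(guess, target) coroutine that returns a boolean verdict.

# cache_sketch.py – verdict cache so an identical (guess, target) pair never hits the LLM twice.
import redis.asyncio as redis

r = redis.Redis(host="redis", port=6379, decode_responses=True)


async def cached_verdict(guess: str, target: str, ask_llm) -> bool:
    key = f"verdict:{target.lower()}:{guess.lower()}"
    hit = await r.get(key)
    if hit is not None:
        return hit == "1"                      # cache hit – no LLM call
    verdict = await ask_llm(guess, target)     # cache miss – ask the model once
    await r.set(key, "1" if verdict else "0")
    return bool(verdict)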
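For step 4, a sketch of a per-IP limiter written as FastAPI middleware. This in-memory sliding window only works per process; a shared Redis counter or a dedicated rate-limiting library would suit multi-worker deployments better.

# rate_limit_sketch.py – naive per-IP sliding-window limiter (single-process only).
import time
from collections import defaultdict, deque

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()
WINDOW_SECONDS = 60
MAX_REQUESTS = 30                              # tune to your load target
_hits: dict[str, deque] = defaultdict(deque)


@app.middleware("http")
async def per_ip_rate_limit(request: Request, call_next):
    ip = request.client.host if request.client else "unknown"
    now = time.monotonic()
    window = _hits[ip]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()                       # drop timestamps outside the window
    if len(window) >= MAX_REQUESTS:
        return JSONResponse({"detail": "Too many requests"}, status_code=429)
    window.append(now)
    return await call_next(request)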
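For step 5, a sketch of a blocklist-style moderation check plus two selectable host personas. The persona wording and the blocklist contents are placeholders; a real profanity library would replace the toy set.

# moderation_persona_sketch.py – placeholder moderation and persona prompts.
from fastapi import HTTPException

PERSONAS = {
    "serious": "You are a strict, matter-of-fact game host. Answer YES or NO only.",
    "cheery": "You are an upbeat, playful game host. Answer YES or NO only.",
}
BLOCKLIST = {"badword1", "badword2"}           # replace with a real profanity list/library


def moderate(guess: str) -> str:
    if any(bad in guess.lower() for bad in BLOCKLIST):
        raise HTTPException(status_code=400, detail="Guess rejected by moderation.")
    return guess


def build_prompt(guess: str, target: str, persona: str = "serious") -> list[dict]:
    system = PERSONAS.get(persona, PERSONAS["serious"])
    user = f"Does '{guess}' beat '{target}'? Answer YES or NO."
    return [{"role": "system", "content": system}, {"role": "user", "content": user}]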
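For step 7, one way the compose file could look if you choose FastAPI + PostgreSQL + Redis; service names, image tags, and environment variables are illustrative and should be adjusted to your own stack.

# docker-compose.yml – illustrative; adjust images, ports and env vars to your stack.
services:
  api:
    build: .
    ports:
      - "8000:8000"
    environment:
      - DATABASE_URL=postgresql://game:game@db:5432/game
      - REDIS_URL=redis://redis:6379/0
      - GENAI_API_KEY=${GENAI_API_KEY}   # keep provider keys in .env, never in the repo
    depends_on:
      - db
      - redis
  db:
    image: postgres:16
    environment:
      - POSTGRES_USER=game
      - POSTGRES_PASSWORD=game
      - POSTGRES_DB=game
  redis:
    image: redis:7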

Requirements

● Backend: Python + FastAPI or Node.js

● Database: PostgreSQL or MongoDB (use Docker image).

● Cache: Redis (for verdict & duplicate‑check caching).

● AI Provider: Any GenAI text endpoint reachable via REST; use env vars for keys.

● Data‑structure: Maintain a linked‑list (in memory or DB) for each game session
plus a global table/collection for aggregate counts.

● Testing: Minimum one end‑to‑end test covering the duplicate‑guess game‑over path (a minimal test sketch follows).
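A sketch of that end-to-end test, assuming a FastAPI app importable as backend.main:app, a POST /game endpoint that opens a session, and a POST /guess endpoint; the route names and response fields are hypothetical.

# tests/e2e_duplicate_test.py – a repeated guess should end the game.
from fastapi.testclient import TestClient

from backend.main import app

client = TestClient(app)


def test_duplicate_guess_ends_game():
    session_id = client.post("/game").json()["session_id"]

    first = client.post("/guess", json={"session_id": session_id, "guess": "paper"})
    assert first.status_code == 200

    repeat = client.post("/guess", json={"session_id": session_id, "guess": "paper"})
    assert repeat.json()["status"] == "game_over"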


Deliverables

1. Source Code – backend + minimal frontend in a GitHub repository.

2. Docker Assets – Dockerfile, docker‑compose.yml, sample .env.

3. README – < 1,000 words: setup, how to play, architectural choices, prompt design.

4. Tech Report (1 page) – caching strategy, linked‑list implementation, concurrency handling metrics, the game logic, and a new feature you could implement.

Judging Criteria

Weight   Category         What We Look For
30 %     Functionality    Correct AI validation, duplicate detection, global counters
25 %     AI Integration   Prompt quality, error resilience, cost/memory awareness
15 %     Performance      Response latency, cache hit‑rate, load‑test results
10 %     Deployment       One‑click Docker success, bonus live URL
10 %     Frontend UX      Clear feedback, emoji/animation polish
10 %     Code & Docs      Clean structure, comments, README

Recommended Folder Structure


genai-intern-game/
├── backend/
│ ├── api/ # FastAPI / Express routes
│ ├── core/
│ │ ├── ai_client.py # talks to the LLM
│ │ ├── cache.py # Redis helpers
│ │ ├── game_logic.py # linked list + validation
│ │ └── moderation.py # profanity / policy checks
│ ├── db/
│ │ ├── models.py
│ │ └── migrations/
│ └── main.py # app entry-point
├── frontend/
│ ├── index.html
│ ├── styles.css
│ └── app.js # minimal JS / React
├── tests/
│ └── e2e_duplicate_test.py
├── Dockerfile
├── docker-compose.yml
├── requirements.txt # or package.json if Node
└── README.md

Things to Keep in Mind

● Prompt ≤ 20 tokens whenever possible – keep it cheap.

● Handle LLM rate‑limits/back‑offs gracefully (see the back‑off sketch below).

● Make sure the linked list can’t grow unbounded in RAM – persist or prune.

● Document how to reset the game state during local testing.

● Stick to semantic versioning and clear commit messages.

● Make sure the host LLM’s responses are creative and do not repeat within a single run.
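A sketch of graceful back-off for the rate-limit bullet above: retry the LLM call with exponential delay plus jitter. The exception type depends on your provider's SDK, so the broad except here is only a placeholder.

# backoff_sketch.py – retry an LLM call with exponential back-off and jitter.
import asyncio
import random


async def call_with_backoff(make_call, max_retries: int = 5):
    """make_call is an async zero-argument callable that performs one LLM request."""
    for attempt in range(max_retries):
        try:
            return await make_call()
        except Exception:                      # narrow this to your SDK's rate-limit error
            if attempt == max_retries - 1:
                raise
            delay = 2 ** attempt + random.random()
            await asyncio.sleep(delay)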
Contact
For any questions during the internship or to submit your deliverables, please reach out
to: Divyansh Sharma – Wasserstoff
Email: [email protected]
