Calling The Model
Now that we've instantiated a new model object from the OpenAI class and verified that
it is functional, it's time to pass in prompts!
We’ll start by creating an asynchronous function named promptFunc() inside
our server.js file, then test it:
const promptFunc = async (input) => {
  try {
    // Pass the input to the model and return its response
    const res = await model.invoke(input);
    return res;
  } catch (err) {
    console.error(err);
    throw err;
  }
};

// Test
promptFunc('How do you capitalize all characters of a string in JavaScript?');
Inside the promptFunc() function, we use a try/catch statement to help us catch any
potential errors that might arise.
Within the try portion of the try/catch, we create a new variable, res, that holds the
value returned from the OpenAI .invoke() method, to which we've passed the test
question, "How do you capitalize all characters of a string in JavaScript?"
When we run the script using node server.js, it may take a moment, but the result
should be the answer to the question as well as an example!
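For reference, the technique the model is being asked about is JavaScript's built-in String.prototype.toUpperCase() method, which its answer will typically demonstrate with something like:

```javascript
// Capitalize every character of a string with the built-in method
const str = 'hello world';
console.log(str.toUpperCase()); // 'HELLO WORLD'
```
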
Before moving on, remove the test call to the promptFunc() from server.js.
What if a user of our application wanted to ask a different coding question? Instead of
having the user go into the server.js and alter the question themselves, we'll need a
way to capture their input and make a call based on that input. To do this, we'll need
a POST route to help us out!
First, we add the body-parser middleware so that Express can parse incoming JSON
request bodies:

app.use(bodyParser.json());
Next, we define a POST route that will handle the user interaction. For this, we
refer back to the ChatGPT code we generated earlier and derive the following:
app.post('/ask', async (req, res) => {
  try {
    const userQuestion = req.body.question;
    // Respond with an error if no question was provided
    if (!userQuestion) {
      return res.status(400).json({ error: 'Please provide a question.' });
    }
    const result = await promptFunc(userQuestion);
    console.log(result);
    res.json({ result });
  } catch (error) {
    console.error('Error:', error.message);
    res.status(500).json({ error: 'Internal Server Error' });
  }
});
To the end of the file we also add the following, which tells Express to listen for requests
at the specified port number.
// Start the server
app.listen(port, () => {
  console.log(`Server is running on http://localhost:${port}`);
});
Altogether, server.js now looks like this:

const express = require('express');
const bodyParser = require('body-parser');
const { OpenAI } = require('@langchain/openai');
require('dotenv').config();

const app = express();
const port = 3000;

app.use(bodyParser.json());

const model = new OpenAI({
  openAIApiKey: process.env.OPENAI_API_KEY,
  temperature: 0,
  model: 'gpt-3.5-turbo'
});

const promptFunc = async (input) => {
  try {
    const res = await model.invoke(input);
    return res;
  } catch (err) {
    console.error(err);
    throw err;
  }
};

app.post('/ask', async (req, res) => {
  try {
    const userQuestion = req.body.question;
    if (!userQuestion) {
      return res.status(400).json({ error: 'Please provide a question.' });
    }
    const result = await promptFunc(userQuestion);
    console.log(result);
    res.json({ result });
  } catch (error) {
    console.error('Error:', error.message);
    res.status(500).json({ error: 'Internal Server Error' });
  }
});

// Start the server
app.listen(port, () => {
  console.log(`Server is running on http://localhost:${port}`);
});
Now when we use node server.js to run our application, we are presented with the
message "Server is running on http://localhost:3000". Use Insomnia to verify that
the POST route at http://localhost:3000/ask is working as expected.
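In Insomnia, set the request method to POST and give it a JSON body containing a question key, which is the property our route reads from req.body.question, for example:

```json
{
  "question": "How do you capitalize all characters of a string in JavaScript?"
}
```

If the request succeeds, the model's answer is logged to the server console and returned in the JSON response.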
Moving forward, we'll learn more about LangChain, including how to use prompt
templates and output parsers to make our AI-powered summary generator application
more expandable and user-friendly!
Feel free to use the LangChain JavaScript documentation to learn more about
using LangChain specifically with JavaScript!