StructuredOutputParser Object

One of LangChain's output parsers is the StructuredOutputParser class, and it's the one we'll use in the application. There are two main components to LangChain output parsers: formatting instructions and the parsing method.
The formatting instructions play a pivotal role, allowing us to define the exact JSON structure that will be returned as the response. Additionally, we can use prompts to dynamically define the values of the data being returned using OpenAI. For instance, not only can we pass back the response, we can also ask OpenAI to provide additional information such as a source, the date the source was last updated, or even the response in another language! It's worth mentioning that this additional information is not static. You can think of it as asking follow-up questions based on the response and passing the results back to the user as a more complete dataset.
The .parse() method takes the raw response from OpenAI as an argument and structures it based on the formatting instructions.
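
As a rough illustration (the exact wording of LangChain's generated instructions varies between versions, and the values below are made up), the two pieces fit together like this:

// Purely illustrative -- not the literal LangChain format instructions.
// The format instructions are injected into the prompt and tell the model
// to reply with JSON that matches our schema. The model's raw string reply
// is then handed to .parse(), which turns it into a JavaScript object.
const rawResponse = '{"code": "console.log([1, 2, 3].reverse());", "explanation": "Reverses the array in place and logs it."}';

const parsed = JSON.parse(rawResponse); // .parse() does roughly this, plus validation
console.log(parsed.explanation); // "Reverses the array in place and logs it."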
Now that we have a high-level, conceptual understanding of an output parser, let's
implement it in our application!

Implementing StructuredOutputParser
To start, we'll need to require the StructuredOutputParser class:

const { StructuredOutputParser } = require("langchain/output_parsers");

Next, we'll instantiate a new object from the StructuredOutputParser class with some additional properties. We can define the exact structure and properties we want returned to the user with the .fromNamesAndDescriptions() method. We're going to keep things simple and provide a single object, and the object we return will have two properties. Although we can make these properties anything we want, in this case we'll pass in code and explanation. The value for each property is a static prompt, which means we can direct OpenAI to fill in the data for us:

// With a `StructuredOutputParser` we can define a schema for the output.
const parser = StructuredOutputParser.fromNamesAndDescriptions({
  code: "JavaScript code that answers the user's question",
  explanation: "detailed explanation of the example code provided",
});

const formatInstructions = parser.getFormatInstructions();

// Instantiation of a new object called "prompt" using the "PromptTemplate" class
const prompt = new PromptTemplate({
  template:
    "You are a programming expert and will answer the user's coding questions as thoroughly as possible using JavaScript. If the question is unrelated to coding, do not answer.\n{format_instructions}\n{question}",
  inputVariables: ["question"],
  partialVariables: { format_instructions: formatInstructions },
});

- Just below our new parser object, we create a new variable, formatInstructions, that holds the value returned by the getFormatInstructions() method.
- The contents of the formatInstructions variable are passed to our template to describe how we want the final response to be structured.
- We also make a small change to where we instantiate our prompt object. First, we add a new property called partialVariables, which is an object that contains the key format_instructions. The format_instructions key holds our formatInstructions as its value. Lastly, we add format_instructions as a variable within the template itself.
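
While developing, it can be helpful to log the format instructions and a sample formatted prompt so you can see exactly what is being sent to the model. This is an optional debugging step, not part of the finished app, and the sample question is just a placeholder:

// Optional: inspect the instructions the parser appends to the prompt.
console.log(formatInstructions);

// Optional: inspect a fully formatted prompt for a sample question.
// prompt.format() returns a Promise, so we log it once it resolves.
prompt
  .format({ question: "How do I reverse an array in JavaScript?" })
  .then((formatted) => console.log(formatted));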
Finally, we modify promptFunc() to incorporate the parser:

const promptFunc = async (input) => {
  try {
    // Format the prompt with the user input
    const promptInput = await prompt.format({
      question: input
    });

    // Call the model with the formatted prompt
    const res = await model.invoke(promptInput);

    // For a non-coding question, the model returns an error message,
    // causing parse() to throw an exception.
    // In this case, simply return the error message instead of the parsed results.
    try {
      const parsedResult = await parser.parse(res);
      return parsedResult;
    } catch (e) {
      return res;
    }
  } catch (err) {
    console.error(err);
    throw err;
  }
};

- Note that if the input question is unrelated to coding, model.invoke() will return a string error message instead of output formatted with code and explanation. In this case, the call to parser.parse() will throw an exception, and we simply return the error message instead of the parsed results.
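
For reference, here is a minimal sketch of how promptFunc() might be called once everything is wired up. In the demo app this call would typically live inside a route handler in server.js, but the exact setup depends on how you structured the rest of the application:

// Hypothetical usage sketch -- the question is just an example.
promptFunc("How do I reverse an array in JavaScript?")
  .then((result) => console.log(result)) // { code: "...", explanation: "..." } or a plain string
  .catch((err) => console.error(err));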
With the output parser implemented, we can start the script using node server.js, enter a coding question, and receive a structured response containing the code and explanation properties.

Now that our demo application is complete, it's your turn!


Read through the challenge.md file and try to create your own AI-powered API
application that leverages LangChain's models, prompts, templates, and output parsers!
