Fine-tuning Llama 2 on a Custom Dataset

Can you make LLMs work better for your specific task? Yes, you can! In this tutorial, you'll learn how to fine-tune Llama 2 [1] on a custom dataset using the QLoRA [2] technique. We'll use a dataset of conversations between a customer and a support agent over Twitter. The goal is to summarize the conversation and compare it to the summary provided by the dataset.

We'll start by installing the required libraries. We'll choose a dataset and have a look at some specific examples from it. Then, we'll fine-tune Llama 2 (the 7b base model) on the dataset using the QLoRA technique and a single GPU. Finally, we'll compare the results of the fine-tuned model with the base Llama 2 model.

Note: in this part, we will be using a Jupyter Notebook to run the code. If you prefer to follow along, you can find the complete notebook in the GitHub repository for this post.

Why Fine-tune an LLM?

Prompts are a convenient way to start using Large Language Models (LLMs), enabling you to tap into the power of Generative AI with minimal effort. However, relying solely on prompts for the long term can lead to several issues:

* High Cost: Complex prompts with extensive context can accumulate a large number of tokens, resulting in increased costs.
* High Latency: Lengthy prompts, especially when chained, can introduce significant delays, negatively affecting user experience.
* Hallucinations: Prompt-based approaches may struggle with providing concise and truthful answers due to insufficient context.
* Meh Results: As foundation models continue to improve, the competitive advantage offered by prompts diminishes. Great results often require fine-tuned models trained on (your) specific data.

If you've encountered these issues, fine-tuning might be a solution. While other techniques like vector search, caching, and prompt chaining can help with some problems, fine-tuning is often the most effective and versatile option.

Benefits of fine-tuning:

* Improved Performance: Fine-tuning tailors the model to your specific needs, resulting in better task performance.
* Lower Cost and Latency: Fine-tuning can reduce the number of tokens required to generate a response, resulting in lower costs and latency.
* Enhanced Privacy: Fine-tuning with your own data and your own deployment adds an extra layer of privacy.

However, there are challenges:

* Time and Resource Consuming: Fine-tuning is a lengthy process that requires a lot of resources (huge GPUs), involving training, optimization, and evaluation.
* Expertise: Achieving optimal results requires expertise in data handling, training, and inference techniques.
* Lack of Contextual Knowledge: Fine-tuned models excel at specific tasks but may lack the versatility of closed-source models like GPT-4.

When to Fine-Tune an LLM?
When prompting doesn't work for you and you have the resources to fine-tune a model. It's that simple! By resources I mean:

* compute power (GPUs)
* time and expertise (know WTF you are doing)
* high quality data - labels, if you are doing summarization, text extraction, or another task that requires labels

Base (non-instruction-tuned) LLMs can be trained in a supervised manner. The process is similar to training a traditional deep learning model: you prepare the data, choose a model, fine-tune it, and evaluate the results. The main difference is that you'll be using text as both input and output. Of course, you can also fine-tune an instruction-tuned model, but that would require a dataset of instructions. The process is similar to fine-tuning a base model, but you'll need to use proper prompt formatting.

Setup

We'll use some common libraries like PyTorch and HuggingFace Transformers. Besides those, we'll need some additional libraries for fine-tuning the Llama 2 model:

```bash
!pip install -Uqqq pip --progress-bar off
!pip install -qqq torch --progress-bar off
!pip install -qqq transformers==4.32.1 --progress-bar off
!pip install -qqq datasets==2.14.4 --progress-bar off
!pip install -qqq peft --progress-bar off
!pip install -qqq bitsandbytes==0.41.1 --progress-bar off
!pip install -qqq trl --progress-bar off
```

The bitsandbytes [3] library will help us load the model in 4 bits. The peft [4] library gives us tools for using the LoRA technique. The trl [5] library provides a trainer class that we'll use to fine-tune the model.

Next, let's add the required imports:

```python
import json
import re
from pprint import pprint

import pandas as pd
import torch
from datasets import Dataset, load_dataset
from huggingface_hub import notebook_login
from peft import LoraConfig, PeftModel
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from trl import SFTTrainer

DEVICE = "cuda:0" if torch.cuda.is_available() else "cpu"
MODEL_NAME = "meta-llama/Llama-2-7b-hf"
```

The model we'll use is the 7b version of Llama 2 (by Meta AI). It's the base model (not instruction-tuned), since we won't use it in conversational mode.

Data Preprocessing

The dataset we'll use is a collection of conversations between a customer and a support agent over Twitter. The data itself is provided by Salesforce and is available on the HuggingFace Datasets hub. The dataset contains 1099 conversations, split into 879 for training, 110 for validation, and 110 for testing.
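One practical note before downloading anything: the Llama 2 weights on the HuggingFace Hub are gated, so you'll likely need to authenticate before the model download later on. The imports above already include notebook_login for this. A minimal sketch, assuming you have an access token and approved Llama 2 access:

```python
from huggingface_hub import notebook_login

# Opens an interactive prompt for your Hugging Face access token.
# Requires that your account has been granted access to Llama 2.
notebook_login()
```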
Let's load it:

```python
dataset = load_dataset("Salesforce/dialogstudio", "TweetSumm")
dataset
```

```
DatasetDict({
    train: Dataset({
        features: ['original dialog id', 'new dialog id', 'dialog index', 'original dialog info', 'log', 'prompt'],
        num_rows: 879
    })
    validation: Dataset({
        features: ['original dialog id', 'new dialog id', 'dialog index', 'original dialog info', 'log', 'prompt'],
        num_rows: 110
    })
    test: Dataset({
        features: ['original dialog id', 'new dialog id', 'dialog index', 'original dialog info', 'log', 'prompt'],
        num_rows: 110
    })
})
```

Let's have a look at the preview on the HuggingFace Datasets Hub:

[Figure: DialogSumm dataset preview]

We're primarily interested in two fields:

* original dialog info - the summary of the conversation
* log - the conversation itself

Let's write a function that extracts the summary and the conversation from a data point:

```python
def generate_text(data_point):
    summaries = json.loads(data_point["original dialog info"])["summaries"][
        "abstractive_summaries"
    ]
    summary = summaries[0]
    summary = " ".join(summary)

    conversation_text = create_conversation_text(data_point)
    return {
        "conversation": conversation_text,
        "summary": summary,
        "text": generate_training_prompt(conversation_text, summary),
    }
```

The summary is extracted from the structure of the data point. Here's an example summary:

```
Customer enquired about his Iphone and Apple watch which is not showing his any steps/activity and health activities. Agent is asking to move to DM and look into it.
```

Let's have a look at the create_conversation_text function:

```python
def create_conversation_text(data_point):
    text = ""
    for item in data_point["log"]:
        user = clean_text(item["user utterance"])
        text += f"user: {user.strip()}\n"

        agent = clean_text(item["system response"])
        text += f"agent: {agent.strip()}\n"
    return text


def clean_text(text):
    text = re.sub(r"http\S+", "", text)
    text = re.sub(r"@[^\s]+", "", text)
    text = re.sub(r"\s+", " ", text)
    return re.sub(r"\^[^ ]+", "", text)
```

The function puts together the conversation text from the log field of the data point. It also cleans the text by removing URLs, mentions, and extra spaces. Here's an example conversation:

```
user: So neither my iPhone nor my Apple Watch are recording my steps/activity, and Health doesn't recognise either source anymore for some reason. Any ideas? please read the above.
agent: Let's investigate this together. To start, can you tell us the software versions your iPhone and Apple Watch are running currently?
user: My iPhone is on 11.1.2, and my watch is on 4.1.
agent: Thank you. Have you tried restarting both devices since this started happening?
user: I've restarted both, also un-paired then re-paired the watch.
agent: Got it. When did you first notice that the two devices were not talking to each other. Do the two devices communicate through other apps such as Messages?
user: Yes, everything seems fine, it's just Health and activity.
agent: Let's move to DM and look into this a bit more. When reaching out in DM, let us know when this first started happening please. For example, did it start after an update or after installing a certain app?
```

The final piece is the prompt generation function (the text we'll use during training):

```python
DEFAULT_SYSTEM_PROMPT = """
Below is a conversation between a human and an AI agent. Write a summary of the conversation.
""".strip()


def generate_training_prompt(
    conversation: str, summary: str, system_prompt: str = DEFAULT_SYSTEM_PROMPT
) -> str:
    return f"""### Instruction: {system_prompt}

### Input:
{conversation.strip()}

### Response:
{summary}
""".strip()
```

We'll use the Alpaca-style prompt format. Here's the prompt from our example:

```
### Instruction: Below is a conversation between a human and an AI agent. Write a summary of the conversation.

### Input:
user: So neither my iPhone nor my Apple Watch are recording my steps/activity, and Health doesn't recognise either source anymore for some reason. Any ideas? please read the above.
agent: Let's investigate this together. To start, can you tell us the software versions your iPhone and Apple Watch are running currently?
user: My iPhone is on 11.1.2, and my watch is on 4.1.
agent: Thank you. Have you tried restarting both devices since this started happening?
user: I've restarted both, also un-paired then re-paired the watch.
agent: Got it. When did you first notice that the two devices were not talking to each other. Do the two devices communicate through other apps such as Messages?
user: Yes, everything seems fine, it's just Health and activity.
agent: Let's move to DM and look into this a bit more. When reaching out in DM, let us know when this first started happening please. For example, did it start after an update or after installing a certain app?

### Response:
Customer enquired about his Iphone and Apple watch which is not showing his any steps/activity and health activities. Agent is asking to move to DM and look into it.
```
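Before training, every split that feeds the trainer needs the text field produced by generate_text. Here's a minimal sketch of that preprocessing step, assuming we simply map generate_text over the train and validation splits; the test split is left untouched because the evaluation code below reads its raw fields, and the process_dataset name is introduced here for illustration:

```python
def process_dataset(data: Dataset) -> Dataset:
    # Apply generate_text to every example and drop the original columns,
    # keeping only "conversation", "summary" and "text".
    return data.map(generate_text, remove_columns=list(data.features))


dataset["train"] = process_dataset(dataset["train"])
dataset["validation"] = process_dataset(dataset["validation"])
# dataset["test"] is kept as-is; the evaluation section rebuilds prompts from the raw fields.
```

We also need the quantized model and tokenizer. They are loaded through a create_model_and_tokenizer helper (used again in the Evaluation section); here's a sketch of what it might look like, assuming 4-bit NF4 quantization with float16 compute to match the quantization config printed next:

```python
def create_model_and_tokenizer():
    # Assumed 4-bit NF4 quantization; mirrors the quantization_config shown below.
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    )
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_NAME,
        quantization_config=bnb_config,
        device_map="auto",
    )
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    # Llama 2 ships without a pad token; reusing EOS for padding is an assumption here.
    tokenizer.pad_token = tokenizer.eos_token
    tokenizer.padding_side = "right"
    return model, tokenizer


model, tokenizer = create_model_and_tokenizer()
model.config.use_cache = False  # common practice while fine-tuning
```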
The model's quantization config confirms the 4-bit setup:

```
{
  'load_in_8bit': False,
  'load_in_4bit': True,
  'llm_int8_threshold': 6.0,
  'llm_int8_skip_modules': None,
  'llm_int8_enable_fp32_cpu_offload': False,
  'llm_int8_has_fp16_weight': False,
  'bnb_4bit_quant_type': 'nf4',
  'bnb_4bit_use_double_quant': False,
  'bnb_4bit_compute_dtype': 'float16'
}
```

The final component is the QLoRA configuration:

```python
lora_r = 16
lora_alpha = 64
lora_dropout = 0.1
lora_target_modules = [
    "q_proj",
    "up_proj",
    "o_proj",
    "k_proj",
    "down_proj",
    "gate_proj",
    "v_proj",
]

peft_config = LoraConfig(
    r=lora_r,
    lora_alpha=lora_alpha,
    lora_dropout=lora_dropout,
    target_modules=lora_target_modules,
    bias="none",
    task_type="CAUSAL_LM",
)
```

We're setting the rank of the update matrices (r = 16) and the dropout (lora_dropout = 0.1). The LoRA updates are scaled by lora_alpha / r, so with lora_alpha = 64 and r = 16 they are scaled by a factor of 4.

Training

We'll use TensorBoard to monitor the training process. Let's start it:

```python
OUTPUT_DIR = "experiments"

%load_ext tensorboard
%tensorboard --logdir experiments/runs
```

Next, we'll set up the training parameters:

```python
training_arguments = TrainingArguments(
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    optim="paged_adamw_32bit",
    logging_steps=1,
    learning_rate=1e-4,
    fp16=True,
    max_grad_norm=0.3,
    num_train_epochs=2,
    evaluation_strategy="steps",
    eval_steps=0.2,
    warmup_ratio=0.05,
    save_strategy="epoch",
    group_by_length=True,
    output_dir=OUTPUT_DIR,
    report_to="tensorboard",
    save_safetensors=True,
    lr_scheduler_type="cosine",
    seed=42,
)
```

Most of the settings are self-explanatory. We're using:

* the paged_adamw_32bit optimizer, a memory-efficient version of AdamW
* a cosine learning rate scheduler
* the group_by_length option to group samples of roughly the same length together, which can help with training stability

The trainer class we'll use is from the trl library. It's a wrapper around the Trainer class from the transformers library. In addition to the standard arguments, we pass in the peft_config and the dataset_text_field option. The latter tells the trainer which field to use for the training prompt:

```python
trainer = SFTTrainer(
    model=model,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=4096,
    tokenizer=tokenizer,
    args=training_arguments,
)

trainer.train()
trainer.save_model()
```

Note: this will save only the QLoRA adapter weights and the model configuration. You still need to load the original model and tokenizer.

Merge the QLoRA adapter with Llama 2 (Optional)

You can merge the QLoRA adapter with the original model. This results in a single model that you can use for inference. Here's how to do it:

```python
from peft import AutoPeftModelForCausalLM

trained_model = AutoPeftModelForCausalLM.from_pretrained(
    OUTPUT_DIR,
    low_cpu_mem_usage=True,
)

merged_model = trained_model.merge_and_unload()
merged_model.save_pretrained("merged_model", safe_serialization=True)
tokenizer.save_pretrained("merged_model")
```

Your model and tokenizer can now be loaded from the merged_model directory.
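For inference, the merged checkpoint loads like any regular Transformers model. A minimal sketch, where the merged_model directory name comes from the save calls above and loading in float16 is an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Reload the merged model and tokenizer from the directory saved above.
reloaded_model = AutoModelForCausalLM.from_pretrained(
    "merged_model",
    device_map="auto",
    torch_dtype=torch.float16,  # assumption: half precision for inference
)
reloaded_tokenizer = AutoTokenizer.from_pretrained("merged_model")
```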
Evaluation

We're going to take a look at some predictions on examples from the test set. We'll use the generate_prompt function to generate the prompt for the model:

```python
def generate_prompt(
    conversation: str, system_prompt: str = DEFAULT_SYSTEM_PROMPT
) -> str:
    return f"""### Instruction: {system_prompt}

### Input:
{conversation.strip()}

### Response:
""".strip()
```

Let's build the examples (summary, conversation and prompt):

```python
examples = []
for data_point in dataset["test"].select(range(5)):
    summaries = json.loads(data_point["original dialog info"])["summaries"][
        "abstractive_summaries"
    ]
    summary = summaries[0]
    summary = " ".join(summary)
    conversation = create_conversation_text(data_point)
    examples.append(
        {
            "summary": summary,
            "conversation": conversation,
            "prompt": generate_prompt(conversation),
        }
    )

test_df = pd.DataFrame(examples)
test_df
```

|   | summary | conversation | prompt |
|---|---------|--------------|--------|
| 0 | Customer is complaining that the watchlist is not updating with new ep... | user: My watchlist is not updating with new ep... | ### Instruction: Below is a conversation betwe... |
| 1 | Customer is asking about the ACC to link to th... | user: hi, my Acc was linked to an old number. ... | ### Instruction: Below is a conversation betwe... |
| 2 | Customer is complaining about the new updates... | user: the new update ios11 sucks. I can't even... | ### Instruction: Below is a conversation betwe... |
| 3 | Customer is complaining about parcel service | user: FUCK YOU AND YOUR SHITTY PARCEL SERVICE... | ### Instruction: Below is a conversation betwe... |
| 4 | The customer says that he is stuck at Staines... | user: Stuck at Staines waiting for a Reading t... | ### Instruction: Below is a conversation betwe... |

Finally, let's add a helper function to summarize a given prompt:

```python
def summarize(model, text: str):
    inputs = tokenizer(text, return_tensors="pt").to(DEVICE)
    inputs_length = len(inputs["input_ids"][0])
    with torch.inference_mode():
        outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.0001)
    return tokenizer.decode(outputs[0][inputs_length:], skip_special_tokens=True)
```

Let's load the base and fine-tuned models:

```python
model, tokenizer = create_model_and_tokenizer()
trained_model = PeftModel.from_pretrained(model, OUTPUT_DIR)
```

Let's look at the first example from the test set:

```python
example = test_df.iloc[0]
print(example.conversation)
```

```
user: My watchlist is not updating with new episodes (past couple days). Any idea why?
agent: Apologies for the trouble, Norlene! We're looking into this. In the meantime, try navigating to the season / episode manually.
user: Tried logging out/back in, that didn't help
agent: Sorry! We assure you that our team is working hard to investigate, and we hope to have a fix ready soon!
user: Thank you! Some shows updated overnight, but others did not...
agent: We definitely understand, Norlene. For now, we recommend checking the show page for these shows as the new eps will be there
user: As of this morning, the problem seems to be resolved. Watchlist updated overnight with all new episodes. Thank you for your attention to this matter! I love Hulu
agent: Awesome! That's what we love to hear. If you happen to need anything else, we'll be here to support!
```
Here's the summary from the dataset:

```python
print(example.summary)
```

Original summary:

```
Customer is complaining that the watchlist is not updated with new episodes from past two days. Agent informed that the team is working hard to investigate to show new episodes on page.
```

We can get the summary from the Llama 2 model:

```python
summary = summarize(model, example.prompt)
pprint(summary)
```

Base model summary:

```
### Input:
user: My watchlist is not updating with new episodes (past couple days). Any idea why?
agent: Apologies for the trouble, Norlene! We're looking into this. In the meantime, try navigating to the season / episode manually.
user: Tried logging out/back in, that didn't help
agent: Sorry! We assure you that our team is working hard to investigate, and we hope to have a fix ready soon!
user: Thank you! Some shows updated overnight, but others did not...
agent: We definitely understand, Norlene. For now, we recommend checking the show page for these shows as the new eps will be there
user: As of this morning, the problem seems to be resolved. Watchlist updated overnight with all new episodes. Thank you for your attention to this matter! I love Hulu
agent: Awesome! That's what we love to hear. If you happen to need anything else, we'll be here to support!

### Output:

### Input:
user: My watchlist
```

This looks like shit. Let's see what the fine-tuned model produces:

```python
summary = summarize(trained_model, example.prompt)
pprint(summary)
```

Fine-tuned model summary:

```
Customer is complaining that his watchlist is not updating with new episodes. Agent updated that they are looking into this and also informed that they will be here to support.

### Input:
Customer is complaining that his watchlist is not updating with new episodes. Agent updated that they are looking into this and also informed that they will be here to support.

### Response:
Customer is complaining that his watchlist is not updating with new episodes. Agent updated that they are looking into this and also informed that they will be here to support.

### Input:
Customer is complaining that his watchlist is not updating with new episodes. Agent updated that they are looking into this and also informed that they will be here to support.

### Response:
Customer is complaining that his watchlist is not updating with new episodes. Agent updated that they are looking into this and also informed that they will be here to support.

### Input:
Customer is complaining that his watchlist is not updating with new episodes. Agent updated that they are looking into this and also informed that they will be here to support.

### Response:
Customer is complaining that his watchlist is
```

Looks better, but let's take only the first paragraph:

```python
pprint(summary.strip().split("\n")[0])
```

Fine-tuned model summary (cleaned):

```
Customer is complaining that his watchlist is not updating with new episodes. Agent updated that they are looking into this and also informed that they will be here to support.
```

This looks much better and gives a great summary.
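We'll repeat this pattern (generate, then keep only the first paragraph) for the remaining examples, so a tiny helper keeps things tidy. This is a convenience sketch built on the summarize function above; the summarize_cleaned name is introduced here, not taken from the original notebook:

```python
def summarize_cleaned(model, prompt: str) -> str:
    # Generate a summary, then keep only the text before the model starts
    # repeating the "### Input / ### Response" pattern.
    summary = summarize(model, prompt)
    return summary.strip().split("\n")[0]
```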
Let's try the next example:

```python
example = test_df.iloc[1]
print(example.conversation)
```

```
user: hi, my Acc was linked to an old number. Now I'm asked to verify my Acc, where a code / call will be sent to my old number. Any way that I can link my Acc to my current number? Pls help
agent: Hi there, we are here to help. We will have a specialist contact you about changing your phone number. Thank you.
user: Thanks. Hope to get in touch soon
agent: That is no problem. Please let us know if you have any further questions in the meantime.
user: Hi sorry, is it for my account : **email**
agent: Can you please delete this post as it does have personal info in it. We have updated your Case Manager who will be following up with you shortly. Feel free to DM us anytime with any other questions or concerns 2/2
user: Thank you
agent: That is no problem. Please do not hesitate to contact us with any further questions. Thank you.
```

Original summary:

```
Customer is asking about the ACC to link to the current number. Agent says that they have updated their case manager.
```

The original summary is quite succinct; let's see what the base model produces:

Base model summary:

```
The conversation between a human and an AI agent is about changing the phone number of an account. The human asks if there is any way to link the account to a new phone number, and the agent replies that they will have a specialist contact the user about changing the phone number. The human thanks the agent and hopes to get in touch soon. The agent then asks the human to delete the post as it contains personal information. The human replies that they will delete the post. The agent then thanks the human for their cooperation and closes the conversation.

### Output:
The conversation between a human and an AI agent is about changing the phone number of an account. The human asks if there is any way to link the account to a new phone number, and the agent replies that they will have a specialist contact the user about changing the phone number. The human thanks the agent and hopes to get in touch soon. The agent then asks the human to delete the post as it contains personal information. The human replies that they will delete the post. The agent then thanks the human for their cooperation and closes the conversation.

### Output:
The conversation between a human and an AI agent is
```

Looks much better compared to the first example, but still quite long. Let's see what the fine-tuned model produces:

Fine-tuned model summary (cleaned):

```
Customer is asking to link his account to his current number. Agent updated that they will have a specialist contact him about changing his phone number.
```

Pretty much perfect: short and to the point. One final example:

```python
example = test_df.iloc[2]
print(example.conversation)
```

```
user: the new update ios11 sucks. I can't even use some apps on my phone.
agent: We want your iPhone to work properly, and we are here for you. Which apps are giving you trouble, and which iPhone?
user: 6s. Words with friends Words pro
agent: Do you see app updates in App Store > Updates? Also, are you using iOS 11.0.3?
user: I am using 11.0.3 and there are no updates for words pro that I can find
agent: Thanks for checking. Next, what happens in that app that makes it unusable?
user: It's says it's not compatible.
agent: Thanks for confirming this. Send us a DM and we'll work from there:
```

Original summary:

```
Customer is complaining about the new updates IOS11 and can't even use some apps on phone. Agent asks to send a DM and work from there URL.
```

Again, let's look at the base model summary:

Base model summary:

```
### Input:
user: the new update ios11 sucks. I can't even use some apps on my phone.
agent: We want your iPhone to work properly, and we are here for you. Which apps are giving you trouble, and which iPhone?
user: 6s. Words with friends Words pro
agent: Do you see app updates in App Store > Updates? Also, are you using iOS 11.0.3?
user: I am using 11.0.3 and there are no updates for words pro that I can find
agent: Thanks for checking. Next, what happens in that app that makes it unusable?
user: It's says it's not compatible.
agent: Thanks for confirming this. Send us a DM and we'll work from there:

### Output:

### Input:
user: the new update ios11 sucks. I can't even use some apps on my phone.
agent: We want your iPhone to work properly, and we are here for you. Which apps are giving you trouble, and which iPhone?
user: 6s. W
```

It is basically a copy of the conversation. Let's see what the fine-tuned model gives us:

Fine-tuned model summary (cleaned):

```
Customer is complaining about the new update ios11 sucks. Agent updated to send a DM and they will work from there.
```

I really like this summary better than the original one. It is short and expresses the main idea (iOS 11 sucks?) of the conversation.

Conclusion

Fine-tuning Llama 2 gave us a way to generate short summaries of conversations. The fine-tuned model produced summaries that were shorter and more to the point than those from the base model. I would say the fine-tuning was successful in producing a better model for our specific use case.

References

1. Llama 2 by Meta AI
2. QLoRA: Efficient Finetuning of Quantized LLMs
3. bitsandbytes
4. PEFT: State-of-the-art Parameter-Efficient Fine-Tuning
5. trl: Train transformer language models with reinforcement learning
6. DialogStudio: Unified Dialog Datasets and Instruction-Aware Models for Conversational AI