UCONN


Prompt engineering

Prompt engineering is the process of designing instructions for a large language model (LLM) in order to get the desired output.

The more detailed and specific the prompt, the closer the LLM's output will be to the accurate, relevant, or creative result you expect. Crafting effective prompts maximizes the power of large language models: detailed and precise prompts enable these models to understand the intent of your instructions.


Prompt Engineering Concepts


Getting to the desired output.


Think of ordering a turkey sandwich in everyday life. If you say "I'd like a turkey sandwich," you may get just two pieces of bread and turkey from the sandwich maker. You then have to hand it back and say you want mayo on it. A better way is to give the person the specific details of how you like the sandwich: "I'd like a turkey sandwich with lettuce, tomato, and mayo on whole wheat bread." Now the person can deliver the sandwich exactly the way you want and expect it.


When building prompts there are some important concepts to consider.


Persona - When designing prompts, it is helpful to define a role or identity you want the LLM to take on, depending on the task you want to complete. A persona can also be used to have the LLM deliver a particular tone or perspective.


Generally we ask the LLM to "act as a" project manager, Python developer, financial analyst, editor, or professor for expert results, or use a persona to craft messages that sound like a toddler, teenager, grandparent, or busy mom.

When using an expertise persona like Python developer, the LLM can draw on the vast amount of code it has analyzed and provide results accordingly.


Using a tone persona can deliver simple or pleasant-sounding results, like those of a grade school teacher.


Task - Refers to the specific ask you want the model to act on. It should clearly define the action you need performed. Use strong verbs like summarize, extract, provide, deliver, define, and explain as calls to action. The task needs to make clear exactly what you want.


Context - The background you provide the LLM. It sets the scene for your task, explaining the scenario on which the answer should be based by supplying the specific issues related to the task's needs.


For example: "I need a daily workout plan to get in shape." I can add the context: "I'm a 50-year-old man with bad knees." Or: "Draft a presentation on the benefits of AI." Context: "I am presenting to high-level executives who have been skeptical of its use."


Constraints - The limitations you give the model in order to narrow the output to usable results. They provide a set of rules for the model's response.


Some constraints might be "Limit the response to 500 words" because of a publishing limitation, "Do not use technical words" out of concern they may confuse the audience, or "Limit bullet points to 5" so the results don't seem cluttered. Constraints allow more focused results.


Format - Instructs the model on how the output needs to be structured: for example, that the output should be Python code, a CSV file, JSON, or a spreadsheet.
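As a sketch of how these concepts fit together, the snippet below assembles persona, task, context, constraints, and format into one prompt string. The function and argument names are illustrative, not part of any real API; any LLM would simply receive the resulting text.

```python
def build_prompt(persona, task, context="", constraints=None, fmt=""):
    """Assemble a prompt from the concepts above: persona, task,
    context, constraints, and format. Names here are illustrative."""
    parts = [f"Act as a {persona}.", task]
    if context:
        parts.append(f"Context: {context}")
    for rule in constraints or []:
        parts.append(f"Constraint: {rule}")
    if fmt:
        parts.append(f"Format the output as {fmt}.")
    return "\n".join(parts)


prompt = build_prompt(
    persona="Python developer",
    task="Review the function below and point out any bugs.",
    context="The code runs in production under Python 3.11.",
    constraints=["Do not use technical jargon"],
    fmt="a bulleted list",
)
print(prompt)
```

Each component stays on its own line, which makes it easy to add or drop a constraint without rewriting the whole prompt.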

Prompting Techniques

Zero-Shot Prompting - The simplest technique for instructing an AI model. You give the model a single, straightforward task to retrieve basic results.

Some examples: "Translate this document into Spanish" or "Extract the text from this image."
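As a sketch, a zero-shot prompt is nothing more than one direct instruction attached to the input, with no examples for the model to imitate (the helper name is illustrative):

```python
def zero_shot_prompt(instruction, text):
    """Zero-shot: a single direct instruction and the input,
    with no worked examples for the model to copy."""
    return f"{instruction}\n\n{text}"


prompt = zero_shot_prompt(
    "Translate the following text into Spanish:",
    "The meeting starts at noon.",
)
print(prompt)
```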

Few-Shot Prompting - A technique in which you provide the LLM with examples of the output you want it to deliver.

You provide some exact outputs, or the format you expect, which gives the model a pattern to follow.

An example might be data transformation, where you provide multiple examples of raw data and how it should look when finished. The model can observe the formatting you want.
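A minimal sketch of that data-transformation case: the prompt lists a few input/output pairs and ends with an unanswered input, so the model completes it by following the demonstrated pattern. The helper and the name-formatting example are illustrative.

```python
def few_shot_prompt(instruction, examples, new_input):
    """Few-shot: include input/output pairs so the model can copy
    the demonstrated pattern for the final, unanswered input."""
    lines = [instruction, ""]
    for given, wanted in examples:
        lines += [f"Input: {given}", f"Output: {wanted}", ""]
    lines += [f"Input: {new_input}", "Output:"]
    return "\n".join(lines)


prompt = few_shot_prompt(
    "Reformat each name as LAST, First.",
    [("john smith", "SMITH, John"), ("mary jones", "JONES, Mary")],
    "alice brown",
)
print(prompt)
```

Ending the prompt with a bare "Output:" invites the model to continue in the same format as the examples above it.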

Chain-of-Thought Prompting - Instructs the AI model to use a step-by-step process that produces a series of logical intermediate conclusions contributing to the overall answer. It lets the model follow a more reasoned path than a quick answer.

Take, for example: "A company's shipping cost increased by 10% while its orders grew 5%. Show why this may have happened." The model can then walk through possible reasons for the increase step by step.
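The shipping-cost example above can be sketched as a prompt that explicitly asks for the reasoning steps before the conclusion (the helper name is illustrative):

```python
def chain_of_thought_prompt(question):
    """Chain-of-thought: explicitly ask for step-by-step reasoning
    before the final answer."""
    return (
        f"{question}\n\n"
        "Work through this step by step, explaining each possible "
        "cause before giving your overall answer."
    )


prompt = chain_of_thought_prompt(
    "Our shipping cost increased by 10% while orders grew only 5%. "
    "Why might this have happened?"
)
print(prompt)
```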

Effective Prompt Strategies

LLMs use pattern recognition to identify relationships between the words you enter and the data the model was trained on.

The results are based on training data, and models use probability, not facts, to state results, which can lead to hallucinations (false results).

Effective prompts have a clear job description.

Persona or expertise level - "Act as a Python developer"

Specific action to be taken - “Summarize this earnings report”

Context - “Focus on income growth”

Input data - “file upload or attachment”

Format - “data should be in JSON format”

Constraint - “keep report under 300 words”
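Joined together, the six parts of this job description read as a single prompt; the sketch below uses the example strings from the list above, lightly reworded into full sentences, with comments mapping each line back to its part.

```python
# One prompt assembling the six parts of the "job description" above.
prompt = "\n".join([
    "Act as a financial analyst.",               # persona / expertise level
    "Summarize the attached earnings report.",   # specific action
    "Focus on income growth.",                   # context
    "The report is provided as a file upload.",  # input data
    "Return the data in JSON format.",           # format
    "Keep the report under 300 words.",          # constraint
])
print(prompt)
```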

Some Examples and Use Cases

Code generation, debugging code, and modifying code for new requirements.

Document Processing for summarization and bullet points.


Extracting data and putting it into formats like JSON or CSV files.

Obtaining the sentiment of articles or stories.

Best Practices

Make sure you allow for a human in the loop to evaluate for bias, hallucinations, or false results.

Do not use sensitive or confidential information as input to a public data model.

Try to provide clear objectives by using action verbs like create, classify, summarize, and translate.


