Prompting for purchasing: Shopping lists & evaluation matrixes (Part 2)

Published: 11/26/2024
Author: danielrosehill
Categories: promptengineering, llm, nlp

(Great for high-ticket purchasing research or ... agonising over which keyboard to pick out)

You know when you're looking to buy something and it seems almost perfect, but then it's missing that one thing?

Alongside all life's woes (I jest), ChatGPT et al. can render those a thing of the distant past.

But before we bid a fond goodbye to agonising purchasing decisions, let's see how we can use these tools effectively to do the job of sizing up Thing A vs. Thing B.

Today's prompting strategy will be:

  • Listing our requirements
  • Instructing the LLM to produce a table/matrix with each item evaluated

In a hopefully not-too-distant future, LLMs will be good enough that we can just say: only show me keyboards meeting all these requirements, and then retreat back into your AI home.

But my experience has been that their skills aren't quite there yet, so it often works better to force them to list out every spec requirement explicitly.

This prompting strategy is quite powerful. Using it, you can whittle a huge swamp of potential purchasing options down to a tidy, targeted list with just a little of prompting's magic ingredient: specificity.


Purchasing list prompt skeleton

Here's the backbone of a prompt we could use to find, per our example, an ergonomic keyboard:

I'm looking for a keyboard:

These features are essential (group A):

  • Ergonomic design (split layout, etc.)
  • Quiet / silent operation
  • No RGB or ability to disable RGB

These features are less essential but still important to me (group B):

  • Wireless support (dongle ideal)
  • Mechanical operation

And finally, these features are less important but also desirable (group C):

  • Macro keys

Must be:

  • RRP < $300
  • Available for Prime delivery on Amazon.com


Part 2: Add an output instruction to the prompt

That was the 'core' of my prompt, which attempts to set out my requirements in a somewhat logical fashion.

Now, I'm going to attach an output instruction to that core, telling the LLM exactly how I want it to format the output it generates.

I'll show three versions to demonstrate how small variations in prompt-writing can instruct for very different outputs.

Output Instruction, V1: Structured List Generation

Add this for a formatted list:

Format your output like this. In the second half of the example, the text in the brackets describes what each variable should represent:

# Keyboard Name
## RRP & Manufacturer

Ergonomic: (Does it feature an ergonomic design? If so, which?)  
Wireless: (What kind of wireless does it have? Dongle / Bluetooth / both?)  
Quiet/silent: (How does it operate in a quiet way?)  
Mechanical: (What switch does it use? Or if it's not mechanical what mechanism?)  
Macro keys: (How many macro keys, if any?)

The downside is that specifying the output format like this is a bit tedious. The upside is that when these prompts work, they work well: you get a targeted run-through of everything that matters to you about whatever it is you're buying.

Imagine that instead of buying a keyboard you were buying a new TV or an expensive laptop. It might be worth the time invested in laying out your requirements very precisely.


Output Instruction, V2: Data Matrix / Table

If you use a markdown notepad editor, you can specifically ask the LLM to generate its output as a markdown table that you can copy and paste straight into it. Or, if you use Google Drive, you can use its markdown integration and do exactly the same thing.

I usually prompt something like the following. (The codefence part is only there because, right now, I'm using Perplexity quite a bit, and it tends to strip codefences out, rendering the table un-copy-able. Hence, I instruct for them explicitly.)

Format your output as a markdown table enclosed within a codefence. The markdown table should list the following parameters: Product | Manufacturer | RRP | Wireless Support

... you get the idea.
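To make the target concrete, here's the kind of table that instruction should produce, enclosed in a codefence. The rows below are purely illustrative placeholders, not real products:

```markdown
| Product    | Manufacturer | RRP  | Wireless Support   |
|------------|--------------|------|--------------------|
| Keyboard A | ExampleCo    | $249 | Dongle + Bluetooth |
| Keyboard B | SampleCorp   | $199 | Bluetooth only     |
```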

Bear in mind that if we ask for a table containing every variable we asked the LLM to review, it's going to get pretty unwieldy very quickly.

So instead you might want to instruct like this:

In column 3, include your assessment of all the spec parameters that I provided.

This way, you tell the LLM to box all of that part of its output into one column of the table.


FYI: You can target a specific data structure in your prompt, too

In my experience, LLMs aren't particularly picky about how you tell them to lay out tables.

But many people don't know that you can actually instruct them to output data in specific arrangements of rows and columns.

This facet of output-format instructing is particularly useful when you're combining multiple outputs into one big data structure (like aggregating prompts that generate CSV data sharing a consistent header row).

In this case, if you can be sure that the LLM is going to adhere to a consistent header row format, you can (usually) cobble together the data of a prompt chain to build surprisingly big CSVs.
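Here's a minimal Python sketch of that stitching step, with hypothetical placeholder data; it assumes each prompt's raw CSV output has been collected into a list of strings sharing one header row:

```python
import csv
import io

def combine_csv_chunks(chunks: list[str]) -> str:
    """Stitch CSV outputs from a prompt chain into one CSV.

    Assumes every chunk begins with the same header row; the header
    is kept from the first chunk and skipped for the rest.
    """
    rows = []
    for chunk in chunks:
        parsed = list(csv.reader(io.StringIO(chunk.strip())))
        if not parsed:
            continue
        if not rows:
            rows.append(parsed[0])  # keep the first header row
        rows.extend(parsed[1:])     # append data rows, skipping repeated headers
    out = io.StringIO()
    csv.writer(out, lineterminator="\n").writerows(rows)
    return out.getvalue()

# Hypothetical outputs from two chained prompts:
chunk_1 = "Product,Manufacturer,RRP\nKeyboard A,ExampleCo,$249\n"
chunk_2 = "Product,Manufacturer,RRP\nKeyboard B,SampleCorp,$199\n"
print(combine_csv_chunks([chunk_1, chunk_2]))
```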

As to how to do it ... so far, it seems to be more art than science.

Sometimes, I'll write my request as a narrative:

In the first column, list the keyboard. In the second, list its RRP. In the third, list a summary of the specs.

Other times, I'll use pipe symbols to denote the target layout in a more traditional way, as if I were actually writing the table in markdown:

Lay out the columns in the table exactly like this: Laptop Name | RRP | Specs

In my experience, there's not much difference in the predictability of the result.

Both work pretty well. But if you're really targeting data consistency, provide an explicit and properly formatted header row and instruct the LLM to output in exactly that format.

Here's the header row of my CSV. Make sure that you adhere to this structure exactly. {Paste header row}.

Final formatting tip:

If you're doing more extensive data evaluation and you're hoping to pipe the data into a spreadsheet or database, ask explicitly for 'CSV' or 'raw CSV' (or TSV, JSON, etc.) and the LLM will output in your chosen format.
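For example, here's a minimal Python sketch of piping a raw-CSV response into SQLite; the table name, columns, and sample response are hypothetical placeholders you'd adapt to your own header row:

```python
import csv
import io
import sqlite3

# Placeholder standing in for the LLM's raw CSV output:
raw_csv = "Product,Manufacturer,RRP\nKeyboard A,ExampleCo,$249\n"

rows = list(csv.reader(io.StringIO(raw_csv)))
header, data = rows[0], rows[1:]

con = sqlite3.connect("purchases.db")  # hypothetical database file
con.execute("CREATE TABLE IF NOT EXISTS keyboards (product TEXT, manufacturer TEXT, rrp TEXT)")
con.executemany("INSERT INTO keyboards VALUES (?, ?, ?)", data)
con.commit()
con.close()
```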

V3: Give me a list, then give me a table

Finally, we get to the last permutation of the output instruction, which asks the LLM to do both things with the data it has gathered: i.e., give it to me as a list, then give it to me as a summary table. Then go back to AI-land.

This is a really nice information format that's easy to digest, but it (naturally) runs a greater risk than the preceding approaches of running into output length limits.

It's easy to forget how capable LLMs are and how much we routinely ask them to do with even simple instructions. In this example, for instance, we're telling the LLM to:

  • Read our prompt (tokenise the words, process their meaning)
  • Find the top 5 matches (which means searching against real-time data!)
  • Grade them all according to our spec system
  • Format all of that into a list and then again into a table

Of course, you can try prompting for it, especially if you're using an LLM with a generous output length window. If you're not, you can divide the work across prompts by chaining.

E.g., prompt 1:

Find the keyboards, format that as a list

And then, when you get output 1:

Take this list and reformat it as a table

Then (if you're really determined) you can just combine the two outputs as one formatted document.
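And if you'd rather run that chain outside a chat UI, here's a minimal sketch assuming the OpenAI Python SDK (v1+); the model name and prompt wording are placeholders:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send one prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder: swap in your preferred model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Prompt 1: find the keyboards, format that as a list.
keyboard_list = ask(
    "Find keyboards matching my requirements and format the results as a list."
)

# Prompt 2: take output 1 and reformat it as a table.
keyboard_table = ask(
    f"Take this list and reformat it as a markdown table:\n\n{keyboard_list}"
)

# Combine the two outputs as one formatted document.
print(keyboard_list + "\n\n" + keyboard_table)
```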

Happy prompting!
