
April 23, 2025

Prompt engineering is an emerging skill introduced to the public with the launch of ChatGPT by OpenAI in 2022¹. Although the hype around prompt engineering becoming the “it” career of the 21st century has died down, it remains an essential skill for efficiently using GPTs (Generative Pre-trained Transformers) and LLMs (Large Language Models).

What is AI prompt engineering?

A prompt is a question, task, or other input given to an LLM to get a specific response from the model. Prompt engineering is the crafting of a prompt. It is both an art and a science, requiring creativity and precision.

Parts of a prompt

A prompt has four basic parts: Context, Data, Task, and Format. Depending on your use case, you may not need all the different parts in your prompt, nor do they have to be in any particular order. A key part of prompt engineering is figuring out which parts of a prompt are needed and crafting them to get the response you want.

Context

Background information, examples, or context that can guide the LLM to a better response.

Data

Optional, depending on the type of prompt. Provides any data or text that needs to be processed by the LLM.

Task

The specific instruction to the LLM. This should be concise and clear.

Format

The structure that you want the LLM to respond with. This can be as simple or as complicated as required.
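
As a sketch, the four parts can also be assembled programmatically when you build prompts in code. The helper below is a hypothetical illustration (the function name and the part ordering are our own, not a Certara API); it assumes that unused parts are simply left empty and omitted:

```python
def build_prompt(task, context="", data="", fmt=""):
    """Assemble a prompt from the four basic parts.

    Parts that are not needed for a given use case are left empty
    and dropped from the final prompt.
    """
    parts = [
        context,  # background or examples to guide the model
        data,     # text or data for the model to process
        task,     # the concise, clear instruction
        fmt,      # desired response structure
    ]
    # Join only the non-empty parts, separated by blank lines.
    return "\n\n".join(p for p in parts if p)

prompt = build_prompt(
    task="Classify the following sentence as Positive or Negative.",
    data="Text: I thought the movie was too long.",
    fmt="Only respond with one word. Do not provide any explanation.",
)
```

Because the parts need not appear in any particular order, you could equally join them task-first; the point is to keep each part a separate, swappable piece while you iterate.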

Here is an example of a prompt showing the different parts:

Classify the following sentences as Positive or Negative. Only respond with one word. Do not provide any explanation.

Example:
Text: The food at the restaurant was great.
Positive

Text: I thought the movie was too long

The task here is clearly stated and then formatting instructions are given so that the model will only respond with “Positive” or “Negative”. In this prompt an example is given to provide additional context, with the text for analysis following.

Note that providing an example to the LLM is a highly effective strategy for getting the model to perform the task correctly and to respond in the way that you want. This is called one-shot prompting and is a key tool for prompt engineers.
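
A one-shot prompt like the one above can be templated so the same worked example is reused for every new input. The helper below is a minimal sketch under our own naming assumptions, reproducing the classification prompt shown earlier:

```python
def one_shot_prompt(instruction, example_input, example_output, new_input):
    """Build a one-shot prompt: the instruction, one worked example,
    then the new input for the model to complete."""
    return (
        f"{instruction}\n\n"
        f"Example:\n"
        f"Text: {example_input}\n"
        f"{example_output}\n\n"
        f"Text: {new_input}\n"
    )

prompt = one_shot_prompt(
    instruction=("Classify the following sentences as Positive or Negative. "
                 "Only respond with one word. Do not provide any explanation."),
    example_input="The food at the restaurant was great.",
    example_output="Positive",
    new_input="I thought the movie was too long",
)
```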

Basic prompt types

Prompt engineering can be used for a number of different tasks. The most common tasks are summarization, extraction, and question and answer. Each type of task requires a different type of prompt and often a different ordering of the basic prompt components described above.

The examples below are just two of the AI prompting techniques available.

Extraction

LLMs can be used to extract data from text. Using Certara’s tools, you can extract data from scientific papers, clinical studies, and other documents.

For an extraction-type prompt, you need to tell the model what to extract and where to extract it from.

Prompt:
Text: 10% of patients given the study drug had adverse effects. 5% of patients who had adverse effects were positive for anti-drug antibodies.
In the above text, what percent of patients had adverse effects?

Response:
According to the text, 10% of patients given the study drug had adverse effects.

There are several ways to improve the prompt. For example, if you include an additional formatting instruction to only return the percent, then the model would return only 10%.

Prompt:
Text: 10% of patients given the study drug had adverse effects. 5% of patients who had adverse effects were positive for anti-drug antibodies.
In the above text, what percent of patients had adverse effects? Only return the percentage. Do not include any explanatory or additional text.

Response:
10%
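
A practical benefit of constraining the format this way is that the response becomes machine-checkable. As a sketch (the regex and function are hypothetical, not part of any Certara tool), you could validate that the model really returned only a percentage and fall back to re-prompting when it did not:

```python
import re

# A response that obeys the formatting instruction should be nothing
# but a percentage such as "10%" (optionally with a decimal part).
PERCENT_RE = re.compile(r"^\s*\d+(?:\.\d+)?%\s*$")

def extract_percent(response):
    """Return the percentage as a float if the response follows the
    format instruction, or None so the caller can retry or re-prompt."""
    if PERCENT_RE.match(response):
        return float(response.strip().rstrip("%"))
    return None

extract_percent("10%")  # → 10.0
extract_percent("According to the text, 10% of patients had adverse effects.")  # → None
```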

Question and answer

For question-and-answer tasks, you can either ask the LLM a question directly or ask it a question about some provided text or data. With direct questioning, the LLM will try to answer based on the large set of data it was trained on. This works well for general knowledge or questions about public data, but it is more prone to hallucination, so responses should be verified.

For example, we can ask about the pharmaceutical industry relying on publicly available data:

Prompt:
What is the largest pharmaceutical company in Europe? Include public data such as market cap and revenue numbers for 2024. Be concise.

Response:

Based on publicly available data, here are the answers to your questions:

Roche Holding AG (ROG)
* Market Cap (2024): approximately €243 billion EUR
* Revenue (2024): approximately €63.8 billion EUR

Please note that market capitalization and revenue figures may fluctuate and are subject to change. These numbers are based on publicly available data as of 2024 and may not reflect the current market situation.

Sources:

* Market capitalization: Yahoo Finance, Bloomberg
* Revenue: Company reports, Statista

Tips and tricks for prompt engineering

If you’re just getting started with prompt engineering, here are a few simple things you can do to help craft an effective prompt.

Start small and work iteratively

You don’t need to craft the perfect prompt on your first try. Prompt engineering is naturally an iterative process, so it can help to start with a small prompt and then build on those results until you get the response you want.

It’s also important to consider if what you’re doing is a one-time query or if you will be doing the same thing multiple times. If you’ll be reusing the prompt over and over, then it is worth the extra effort to refine the prompt. If not, maybe your first or second try is just fine.

Constrain the Large Language Model

Using phrases like

Be precise.

and

Do not make things up if you don’t know. Say “I don’t know” instead.

can help limit the amount of hallucination.

Providing extra guidance can also help get you better responses. Instead of saying

Summarize this paper.

you may get a better response by being more specific, such as

You are a smart and friendly assistant that reads scientific papers on medicinal chemistry and clinical outcomes. Summarize the drug-drug interactions found in this paper. Put each drug-drug interaction found in a separate bullet point.

It can also be helpful to repeat an instruction to the LLM but phrase it differently. For example, instead of just saying

Respond with only one word.

You can say

Respond with only one word. Do not add any additional text.
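
Constraints like this can also be enforced after the fact: check the response, and if the model ignored the instruction, restate it and try again. The sketch below is a hypothetical illustration; `call_model` stands in for whatever function sends a prompt to your LLM and returns its text response:

```python
def is_one_word(response):
    """Check that the model obeyed 'Respond with only one word.'"""
    return len(response.strip().split()) == 1

def classify(call_model, prompt, max_retries=2):
    """Call the model, retrying with a restated constraint when the
    response is not a single word."""
    for _ in range(max_retries + 1):
        response = call_model(prompt)
        if is_one_word(response):
            return response.strip()
        # Restate the constraint, phrased differently, as suggested above.
        prompt += "\nRespond with only one word. Do not add any additional text."
    return response.strip()
```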

Providing examples/one-shot

We saw above how providing an example of the output you are looking for, called one-shot prompting, helps the LLM. In the example below, we make a statement, but it is unclear, even to a human, what response we are looking for. The LLM does its best and comes up with an answer:

Prompt:
The odd numbers in this group add up to an even number: 15, 32, 5, 13, 82, 7, 1.

Response:

The odd numbers in the group are 15, 5, 13, 7, and 1.

By providing the LLM with an example of the output we are looking for, the LLM is able to come up with the correct response:

Prompt:
The odd numbers in this group add up to an even number: 8, 5, 3, 2, 5, 1.
A: True. The odd numbers are 5, 3, 5, and 1. They add up to 14, which is an even number.

The odd numbers in this group add up to an even number: 15, 32, 5, 13, 82, 7, 1.
A:

Response:

False. The odd numbers are 15, 5, 13, 7, and 1. They add up to 41, which is an odd number.

For more complex prompts, it may be useful to provide multiple examples, such as a True and False case. This is called few-shot prompting, but the idea is the same as one-shot prompting.
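
The few-shot pattern above can be templated for any number of examples. The helper below is a sketch under our own naming assumptions: it formats each (input, answer) pair, then ends with the new input and an empty `A:` for the model to complete:

```python
def few_shot_prompt(examples, new_input):
    """Format a few-shot prompt from (input, answer) pairs, followed by
    the new input with an empty 'A:' for the model to complete."""
    blocks = [f"{q}\nA: {a}" for q, a in examples]
    blocks.append(f"{new_input}\nA:")
    return "\n\n".join(blocks)

prompt = few_shot_prompt(
    examples=[
        ("The odd numbers in this group add up to an even number: 8, 5, 3, 2, 5, 1.",
         "True. The odd numbers are 5, 3, 5, and 1. They add up to 14, "
         "which is an even number."),
    ],
    new_input="The odd numbers in this group add up to an even number: "
              "15, 32, 5, 13, 82, 7, 1.",
)
```

Adding a second example (for instance a False case) is just another tuple in `examples`, which is what makes few-shot prompting easy to extend as the task gets harder.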

Additional resources

You can find additional prompt engineering guidance on Certara.AI’s developers pages.

Certara’s AI solutions

Implementing and using AI has become a top priority for many companies. Some estimates put the value to be unlocked by AI in the life sciences at $5 billion to $7 billion². Despite this, 74% of companies struggle to achieve their AI goals³.

Certara has been building AI tools since 2017 and can help your company achieve its AI goals and master prompt engineering. Certara.AI’s CoAuthor provides an interactive environment for you to test your prompts and have the LLM extract, summarize, and glean insight from documents and scientific data.

CoAuthor is a tool designed with medical writers in mind. It comes with a library of pre-designed prompts to help write reports for regulatory submissions, but you can also write your own prompts and build your own prompt library.

Ian Kerman

Data Science and AI Client Solutions Architect

Ian Kerman is a Data Science and AI Client Solutions Architect at Certara, where he leads initiatives at the intersection of artificial intelligence, data science, and life sciences. With over 15 years of industry experience, Ian brings deep expertise in machine learning, MLOps, and scientific informatics, helping life sciences organizations translate complex data into actionable insights. At Certara, he spearheads advanced R&D efforts in large language models, user experience design, and integrated biological and chemical data knowledge systems.

Before joining Certara, Ian held leadership roles at LabVoice and BIOVIA (Dassault Systèmes), where he led cross-functional teams to deliver AI-powered solutions, voice-enabled lab assistants, and custom data platforms for pharma and biotech customers. His work has contributed to innovations in computational drug discovery, antibody developability prediction, and laboratory automation.

Ian is also an experienced educator and advocate for scientific collaboration. He has developed and delivered technical training programs, mentored students on AI-focused research projects, and co-founded the Data Science and AI Topical Interest Group with the Society for Laboratory Automation and Screening (SLAS). A frequent speaker at industry conferences, Ian combines technical depth with a passion for advancing AI in the life sciences.

Ian earned an M.S. in Computer Science, focusing on Machine Learning, from the Georgia Institute of Technology and an M.S. in Biology, alongside undergraduate degrees in Bioinformatics and Molecular Biology from the University of California, San Diego.

References

1 https://openai.com/index/chatgpt/

2 https://www2.deloitte.com/us/en/pages/life-sciences-and-health-care/articles/value-of-genai-in-pharma.html

3 https://www.bcg.com/press/24october2024-ai-adoption-in-2024-74-of-companies-struggle-to-achieve-and-scale-value

Experience the future of regulatory document authoring with CoAuthor

See for yourself in a free demo how CoAuthor can transform your regulatory document authoring process.

Shorten submission timelines with generative AI
Ensure compliance and consistency with built-in QA tools
Operate with peace of mind on secure AI dedicated to your organization
