GenAI: Conversational Agent on Medical Research Papers
This accelerator shows how to use predictive modeling in DataRobot to identify trusted research and then build a knowledge base for a conversational agent using the DataRobot generative AI offering.
Medical professionals must constantly stay informed of the latest research in their field, including domains beyond their specialization. Given the rate at which research publications flood the internet, it is difficult for medical professionals to keep up with the growing volume of trusted, approved research papers. Access to trusted repositories helps, but many sources such as Nature, PubMed, and assorted journals also publish a large amount of work. A knowledge system that curates trusted papers and then allows fast retrieval through a question-and-answer agent greatly simplifies medical professionals’ knowledge initiatives.
Another key point is that an LLM can hallucinate, confidently answering questions even when it lacks reliable information. A knowledge base provides contextual data that grounds the LLM and reduces hallucination. Additionally, the knowledge base supplies the LLM with information it has not been trained on.
This accelerator provides instructions on how to build this type of system using DataRobot’s generative AI solution framework. It shows how to build a pipeline that creates a knowledge base from only trusted research papers, and a conversational agent that can answer questions from medical professionals.
Setup
Before proceeding with running this notebook, review the following steps.
- Enable the following feature flags for your DataRobot account:
- Enable Notebooks Filesystem Management
- Enable Proxy models
- Enable Public Network Access for all Custom Models
- Enable the Injection of Runtime Parameters for Custom Models
- Enable Monitoring Support for Generative Models
- Enable Custom Inference Models
- Enable the notebook filesystem for this notebook in the notebook sidebar.
- Add the notebook environment variables OPENAI_API_KEY, OPENAI_ORGANIZATION, and OPENAI_API_BASE. Set the values with your Azure OpenAI credentials.
- Set the notebook session timeout to 180 minutes.
- Restart the notebook container using at least a “Medium” (16GB RAM) instance.
- Upload your documents archive to the notebook.
In[ ]:
try:
    import os

    assert "OPENAI_API_KEY" in os.environ
    assert "OPENAI_ORGANIZATION" in os.environ
    assert "OPENAI_API_BASE" in os.environ
except Exception as e:
    raise RuntimeError(
        "Please follow the setup steps before running the notebook."
    ) from e
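The custom model hooks defined later in this notebook also read OPENAI_API_TYPE, OPENAI_API_VERSION, and OPENAI_DEPLOYMENT_NAME from the environment. If you plan to run the deployment cells, you can optionally extend the check above; this is a small sketch rather than part of the original setup check:

# Optional: also verify the Azure OpenAI variables used by the deployment hooks below.
for var in ("OPENAI_API_TYPE", "OPENAI_API_VERSION", "OPENAI_DEPLOYMENT_NAME"):
    assert var in os.environ, f"Please set the {var} notebook environment variable."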
Install libraries
The accelerator uses LangChain for developing the agent, and FAISS and Sentence Transformers for the RAG system. The LLM is an OpenAI model hosted on Azure. DataRobot gives you the freedom to use your preferred components in the stack.
In[ ]:
!pip install "langchain==0.0.244" \
"faiss-cpu==1.7.4" \
"sentence-transformers==2.2.2" \
"unstructured==0.8.4" \
"openai==0.27.8" \
"datarobotx==0.1.14"
In[ ]:
!pip install datarobotx[llm] json2html
Document corpus
The cells below download the corpus of both trusted and non-trusted medical research abstracts. These simulate the real-world documents that need to be processed and added to the agent’s knowledge base. The dataset is sourced from Kaggle; this demo uses a subset of the papers so you can run the notebook quickly. You can find the files.zip archive here.
In[ ]:
import shutil
import requests
r = requests.get(
    "https://s3.amazonaws.com/datarobot_public_datasets/ai_accelerators/medical_agent/files.zip",
    allow_redirects=True,
)
open("/home/notebooks/storage/files.zip", "wb").write(r.content)
shutil.unpack_archive(
    "/home/notebooks/storage/files.zip", "/home/notebooks/storage/", "zip"
)
In[ ]:
import os
len(os.listdir("/home/notebooks/storage/files/"))
Out[ ]:
2500
Trusted research papers
As the aim of this accelerator is to include only trusted papers in the knowledge base, this workflow defines a function to check whether a paper can be trusted. You build a DataRobot AutoML predictive model to predict whether a research paper’s trust level is high. With the DataRobot and DataRobotX APIs it is easy to build and deploy this model. You can find the dataset medical_papers_trust_scoring.csv here.
In[ ]:
import time
import datarobotx as drx
import pandas as pd
from sklearn.model_selection import train_test_split
# Initialize Client if running this notebook out of DataRobot platform
# drx.Client()
df = pd.read_csv(
    "https://s3.amazonaws.com/datarobot_public_datasets/ai_accelerators/medical_agent/medical_papers_trust_scoring.csv"
)
df_train, df_test = train_test_split(df, test_size=0.4, random_state=42)
model = drx.AutoMLModel()
model.fit(df_train, target="trust")
deployment = model.deploy(wait_for_autopilot=True)
In[ ]:
predictions = deployment.predict(df_test)
df_test["predictions"] = predictions.prediction.values
predictions.info()
Out[ ]:
# Waiting for deployment to be initialized... - Initializing model for prediction explanations...
Out[ ]:
- Awaiting deployment creation...
# Making predictions
- Making predictions with deployment
[dreamy-torvalds](https://app.datarobot.com/deployments/64f09dc8b4c00219dbd2a1a2/overview)
- Uploading dataset to be scored...
- Created deployment
[dreamy-torvalds](https://app.datarobot.com/deployments/64f09dc8b4c00219dbd2a1a2/overview)
from model [Elastic-Net Classifier with Naive Bayes Feature Weighting
(L2)](https://app.datarobot.com/projects/64f09b83d9e7c11a47b50649/models/64f09d3ea2ddf9b2a314de95/blueprint)
in project
[sad-boyd](https://app.datarobot.com/projects/64f09b83d9e7c11a47b50649/eda)
# Deployment complete
100%|██████████████████████████████████| 2.00M/2.00M [00:00<00:00, 32.9MB/s]
- Scoring...
# Predictions complete
<class 'datarobotx.common.utils.FutureDataFrame'>
RangeIndex: 1000 entries, 0 to 999
Data columns (total 1 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 prediction 1000 non-null object
dtypes: object(1)
memory usage: 7.9+ KB
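Optionally, because df_test retains the labeled trust column alongside the predictions added above, you can run a quick sanity check of the deployed classifier on the holdout split. A minimal sketch, assuming the target column is named trust as in model.fit above:

# Fraction of holdout rows where the deployed model agrees with the label.
holdout_accuracy = (df_test["trust"] == df_test["predictions"]).mean()
print(f"Holdout agreement with labels: {holdout_accuracy:.2%}")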
In[ ]:
%%time


def get_paper_trust_level(file_path):
    # Read the paper's abstract text and score it with the deployed trust model.
    with open(file_path, "r") as file_paper:
        paper_content = file_paper.read()
    pred = deployment.predict(pd.DataFrame({"abstract": [paper_content]}))
    return pred["prediction"].iloc[0]


print(
    "Trust level for paper # 24219891 ",
    get_paper_trust_level("/home/notebooks/storage/files/24219891.txt"),
)
print(
    "Trust level for paper # 24229754 ",
    get_paper_trust_level("/home/notebooks/storage/files/24229754.txt"),
)
Out[ ]:
# Making predictions
- Making predictions with deployment
[dreamy-torvalds](https://app.datarobot.com/deployments/64f09dc8b4c00219dbd2a1a2/overview)
- Uploading dataset to be scored...
100%|██████████████████| 1.81k/1.81k [00:00<00:00, 108kB/s]
- Scoring...
# Predictions complete
Trust level for paper # 24219891 low
# Making predictions
- Making predictions with deployment
[dreamy-torvalds](https://app.datarobot.com/deployments/64f09dc8b4c00219dbd2a1a2/overview)
- Uploading dataset to be scored...
100%|██████████████████| 1.89k/1.89k [00:00<00:00, 101kB/s]
- Scoring...
# Predictions complete
Trust level for paper # 24229754 high
CPU times: user 76.3 ms, sys: 21 ms, total: 97.2 ms
Wall time: 797 ms
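Scoring one file per request is convenient for spot checks, but the full corpus is faster to score in a single batch. A minimal optional sketch that reuses the same deployment and the abstract feature it was trained on (the filename column here is only used to join back to the documents):

import os

FILES_DIR = "/home/notebooks/storage/files/"

# Read every abstract into one DataFrame and score it in a single request.
rows = []
for filename in os.listdir(FILES_DIR):
    with open(os.path.join(FILES_DIR, filename), "r") as f:
        rows.append({"filename": filename, "abstract": f.read()})

corpus_df = pd.DataFrame(rows)
corpus_df["predictions"] = deployment.predict(corpus_df).prediction.values

# Filenames predicted as high trust, usable in the filtration step below.
trusted_files = corpus_df.loc[corpus_df.predictions == "high", "filename"].tolist()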
Load and split text
If you are applying this recipe to a different use case, consider the following:
- Use additional or alternative document loaders.
- Filter out extraneous or noisy documents.
- Choose an appropriate chunk_size and chunk_overlap. These are counted by number of characters, NOT tokens (a token-based alternative is sketched after the output below).
In[ ]:
import re
from langchain.document_loaders import DirectoryLoader
from langchain.text_splitter import MarkdownTextSplitter, RecursiveCharacterTextSplitter
SOURCE_DOCUMENTS_DIR = "/home/notebooks/storage/files/"
SOURCE_DOCUMENTS_FILTER = "*.txt"
loader = DirectoryLoader(f"{SOURCE_DOCUMENTS_DIR}", glob=SOURCE_DOCUMENTS_FILTER)
splitter = RecursiveCharacterTextSplitter(
    chunk_size=2000,
    chunk_overlap=1000,
)
print(f"Loading {SOURCE_DOCUMENTS_DIR} directory")
data = loader.load()
print(f"Splitting {len(data)} documents")
docs = splitter.split_documents(data)
print(f"Created {len(docs)} documents")
Out[ ]:
Loading /home/notebooks/storage/files/ directory
[nltk_data] Downloading package punkt to /home/notebooks/nltk_data...
[nltk_data] Unzipping tokenizers/punkt.zip.
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data] /home/notebooks/nltk_data...
[nltk_data] Unzipping taggers/averaged_perceptron_tagger.zip.
Out[ ]:
Splitting 2500 documents
Created 3474 documents
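If you prefer to size chunks by tokens rather than characters, LangChain text splitters can also be built from a tiktoken encoder. A minimal optional sketch, assuming the tiktoken package is installed (it is not part of the install step above):

# chunk_size and chunk_overlap are measured in tokens here, not characters.
token_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=500,
    chunk_overlap=250,
)
token_docs = token_splitter.split_documents(data)
print(f"Created {len(token_docs)} token-sized documents")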
Filtration
This cell filters the documents so that only trusted papers are loaded into the knowledge base.
In[ ]:
from tqdm import tqdm

approved_docs = []
for i in tqdm(range(len(docs))):
    if (
        docs[i].metadata["source"].split("/")[-1]
        in df_test[df_test.predictions == "high"]["filename"].tolist()
    ):
        approved_docs.append(docs[i])

len(approved_docs)
Out[ ]:
100%|██████████| 3474/3474 [00:01<00:00, 2997.29it/s]
254
In[ ]:
approved_docs[0]
Out[ ]:
Document(page_content="24432712 BACKGROUND\tThe EXAcerbations of Chronic Pulmonary Disease Tool ( EXACT ) is a patient-reported outcome measure to standardize the symptomatic assessment of chronic obstructive pulmonary disease exacerbations , including reported and unreported events . BACKGROUND\tThe instrument has been validated in a short-term study of patients with acute exacerbation and stable disease ; its performance in longer-term studies has not been assessed . OBJECTIVE\tTo test the EXACT 's performance in three randomized controlled trials and describe the relationship between resource-defined medically treated exacerbations ( MTEs ) and symptom ( EXACT ) - defined events . METHODS\tPrespecified secondary analyses of data from phase II randomized controlled trials testing new drugs for the management of chronic obstructive pulmonary disease : one 6-month trial ( United States ) ( n = 235 ) and two 3-month , multinational trials ( AZ 1 [ n = 749 ] , AZ 2 [ n = 597 ] ) . METHODS\tIn each case , the experimental drugs were found to be ineffective , permitting assessment of the EXACT 's performance in three independent studies of moderate to severe high-risk patients on maintenance therapies . RESULTS\tThe mean age of subjects was 62 to 64 years ; 48 to 76 % were male . RESULTS\tMean FEV1 % predicted was 42 to 59 % .", metadata={'source': '/home/notebooks/storage/files/24432712.txt'})
Create a vector database from the documents
- This notebook uses FAISS, an open source, in-memory vector store that can be serialized and loaded to disk.
- The notebook uses the open source Hugging Face all-MiniLM-L6-v2 embeddings model. Users are free to experiment with other embedding models.
In[ ]:
from langchain.docstore.document import Document
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.vectorstores.faiss import FAISS
import torch
if not torch.cuda.is_available():
    EMBEDDING_MODEL_NAME = "all-MiniLM-L6-v2"
else:
    EMBEDDING_MODEL_NAME = "all-mpnet-base-v2"

# Will download the model the first time it runs
embedding_function = SentenceTransformerEmbeddings(
    model_name=EMBEDDING_MODEL_NAME,
    cache_folder="storage/deploy/sentencetransformers",
)

try:
    # Load existing db from disk if previously built
    db = FAISS.load_local("storage/deploy/faiss-db", embedding_function)
except:
    texts = [doc.page_content for doc in approved_docs]
    metadatas = [doc.metadata for doc in approved_docs]
    # Build and save the FAISS db to persistent notebook storage; this can take some time w/o GPUs
    db = FAISS.from_texts(texts, embedding_function, metadatas=metadatas)
    db.save_local("storage/deploy/faiss-db")

print(f"FAISS VectorDB has {db.index.ntotal} documents")
Perform sanity tests on the vector database
Test the vector database retrieval of relevant information for your RAG.
In[ ]:
# Test the database
# db.similarity_search("Find papers around obesity")
db.similarity_search(
"Can antioxidants impact exercise performance in normobaric hypoxia"
)
# db.max_marginal_relevance_search("How do I replace a custom model on an existing custom environment?")
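To inspect how relevant the retrieved chunks are, the LangChain FAISS wrapper also provides similarity_search_with_score, which returns each chunk together with its distance score. A short optional check:

# Lower scores indicate closer matches under FAISS's default distance metric.
for doc, score in db.similarity_search_with_score(
    "Can antioxidants impact exercise performance in normobaric hypoxia", k=4
):
    print(round(score, 3), doc.metadata["source"])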
Define hooks for deploying an unstructured custom model
Deploying unstructured custom models in DataRobot requires two hooks, load_model and score_unstructured. These hooks help DataRobot understand the model’s structure, inputs, outputs, and monitoring. More information is available here.
In[ ]:
import os

OPENAI_API_BASE = os.environ["OPENAI_API_BASE"]
OPENAI_ORGANIZATION = os.environ["OPENAI_ORGANIZATION"]
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]
OPENAI_API_TYPE = os.environ["OPENAI_API_TYPE"]
OPENAI_API_VERSION = os.environ["OPENAI_API_VERSION"]
OPENAI_DEPLOYMENT_NAME = os.environ["OPENAI_DEPLOYMENT_NAME"]


def load_model(input_dir):
    """Custom model hook for loading our knowledge base."""
    import os

    from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
    from langchain.vectorstores.faiss import FAISS

    os.environ["OPENAI_API_TYPE"] = OPENAI_API_TYPE
    os.environ["OPENAI_API_BASE"] = OPENAI_API_BASE

    embedding_function = SentenceTransformerEmbeddings(
        model_name=EMBEDDING_MODEL_NAME,
        cache_folder=input_dir + "/" + "storage/deploy/sentencetransformers",
    )
    db = FAISS.load_local(
        input_dir + "/" + "storage/deploy/faiss-db", embedding_function
    )
    return OPENAI_DEPLOYMENT_NAME, db


def score_unstructured(model, data, query, **kwargs) -> str:
    """Custom model hook for making completions with our knowledge base.

    When requesting predictions from the deployment, pass a dictionary
    with the following keys:
    - 'question' the question to be passed to the retrieval chain
    - 'openai_api_key' the openai token to be used
    - 'chat_history' (optional) a list of two-element lists corresponding to
      preceding dialogue between the Human and AI, respectively

    datarobot-user-models (DRUM) handles loading the model and calling
    this function with the appropriate parameters.

    Returns:
    --------
    rv : str
        Json dictionary with keys:
        - 'question' user's original question
        - 'chat_history' chat history that was provided with the original question
        - 'answer' the generated answer to the question
        - 'references' list of references that were used to generate the answer
        - 'error' error message if exception in handling request
    """
    import json

    from langchain.chains import ConversationalRetrievalChain
    from langchain.chat_models import AzureChatOpenAI
    from langchain.vectorstores.base import VectorStoreRetriever

    try:
        deployment_name, db = model
        data_dict = json.loads(data)
        llm = AzureChatOpenAI(
            deployment_name=OPENAI_DEPLOYMENT_NAME,
            openai_api_type=OPENAI_API_TYPE,
            openai_api_base=OPENAI_API_BASE,
            openai_api_version=OPENAI_API_VERSION,
            openai_api_key=data_dict["openai_api_key"],
            openai_organization=OPENAI_ORGANIZATION,
            model_name=OPENAI_DEPLOYMENT_NAME,
            temperature=0,
            verbose=True,
        )
        retriever = VectorStoreRetriever(
            vectorstore=db,
            # search_kwargs={"filter": {"trust_level": "high"}}
        )
        chain = ConversationalRetrievalChain.from_llm(
            llm, retriever=retriever, return_source_documents=True
        )
        if "chat_history" in data_dict:
            chat_history = [
                (
                    human,
                    ai,
                )
                for human, ai in data_dict["chat_history"]
            ]
        else:
            chat_history = []
        rv = chain(
            inputs={
                "question": data_dict["question"],
                "chat_history": chat_history,
            },
        )
        rv["references"] = [
            doc.metadata["source"] for doc in rv.pop("source_documents")
        ]
    except Exception as e:
        rv = {"error": f"{e.__class__.__name__}: {str(e)}"}
    return json.dumps(rv)
Examples
Here are some examples of the agent answering questions using the research papers as context.
In[ ]:
import json
import warnings

from json2html import *

warnings.filterwarnings("ignore")


def get_completion(question):
    output = score_unstructured(
        load_model("."),
        json.dumps(
            {
                "question": question,
                "openai_api_key": os.environ["OPENAI_API_KEY"],
            }
        ),
        None,
    )
    output = json.loads(output)
    output_cleaned = {
        "question": output["question"],
        "answer": output["answer"],
        "references": [
            (open(file, "r")).read()[0:300].replace("\t", " ").replace("\n", " ")
            + "...."
            for file in output["references"]
        ],
    }
    html_ = json2html.convert(json=output_cleaned)
    return html_
In[ ]:
from IPython.display import display, HTML
question = "How to treat obesity? Please provide conclusions from papers where the methodology is robust."
display(HTML(get_completion(question)))
Out[ ]:
question
How to treat obesity? Please provide conclusions from papers where the methodology is robust.
answer
Based on the provided context, here are the conclusions from the papers that have robust methodologies: 1. In a study comparing different interventions for overweight or obese adults with prediabetes and/or metabolic syndrome, it was found that baseline obesity severity may influence the effectiveness of lifestyle interventions. Participants with a baseline BMI of 35 or higher had greater reductions in BMI, body weight, and waist circumference in a coach-led group intervention compared to usual care and self-directed individual intervention. On the other hand, the self-directed intervention was more effective than usual care only among participants with baseline BMIs between 25 and 35. [Study: 24369008] 2. A randomized controlled trial involving obese patients with uncontrolled type 2 diabetes compared intensive medical therapy alone to intensive medical therapy plus bariatric surgery (gastric bypass or sleeve gastrectomy). After 3 years, the surgical groups had significantly better glycemic control, with a glycated hemoglobin level of 6.0% or less achieved by 38% of the gastric-bypass group and 24% of the sleeve-gastrectomy group, compared to only 5% in the medical-therapy group. The surgical groups also had greater reductions in weight and use of glucose-lowering medications. [Study: 24679060] Please note that these conclusions are specific to the provided papers and may not encompass all possible treatments for obesity. It is always recommended to consult with a healthcare professional for personalized advice and treatment options.
references
- 24679060 BACKGROUND In short-term randomized trials ( duration , 1 to 2 years ) , bariatric surgery has been associated with improvement in type 2 diabetes mellitus . METHODS We assessed outcomes 3 years after the randomization of 150 obese patients with uncontrolled type 2 diabetes to receive eithe....
- 24369008 OBJECTIVE To examine whether baseline obesity severity modifies the effects of two different , primary care-based , technology-enhanced lifestyle interventions among overweight or obese adults with prediabetes and/or metabolic syndrome . METHODS We compared mean differences in changes from ....
- 24679060 BACKGROUND In short-term randomized trials ( duration , 1 to 2 years ) , bariatric surgery has been associated with improvement in type 2 diabetes mellitus . METHODS We assessed outcomes 3 years after the randomization of 150 obese patients with uncontrolled type 2 diabetes to receive eithe....
- 24754911 BACKGROUND The Canola Oil Multicenter Intervention Trial ( COMIT ) was a randomized controlled crossover study designed to evaluate the effects of five diets that provided different oils and/or oil blends on cardiovascular disease ( CVD ) risk factors in individuals with abdominal obesity .....
Adversarial example
Here is an example where the knowledge base does not have the information the agent needs, meaning that no trusted paper on the topic has been included in the knowledge base yet. With the combination of a low temperature setting and the knowledge base, you can keep the agent in check and avoid hallucinations.
In[ ]:
question = "What are the effective treatments for rheumatoid arthritis? Please provide \
conclusions from papers where the methodology is robust."
display(HTML(get_completion(question)))
Out[ ]:
question
Can high sweetener intake worsen pathogenesis of cardiometabolic disorders?
answer
I don't have any information on the specific effects of high sweetener intake on the pathogenesis of cardiometabolic disorders.
references
- 25319187 BACKGROUND Whether the type of dietary fat could alter cardiometabolic responses to a hypercaloric diet is unknown . BACKGROUND In addition , subclinical cardiometabolic consequences of moderate weight gain require further study . RESULTS In a 7-week , double-blind , parallel-group , random....
- 24980134 BACKGROUND Managing cardiovascular risk factors is important for reducing vascular complications in type 2 diabetes , even in individuals who have achieved glycemic control . BACKGROUND Nut consumption is associated with reduced cardiovascular risk ; however , there is mixed evidence about ....
- 24284442 BACKGROUND Leucine is a key amino acid involved in the regulation of skeletal muscle protein synthesis . OBJECTIVE We assessed the effect of the supplementation of a lower-protein mixed macronutrient beverage with varying doses of leucine or a mixture of branched chain amino acids ( BCAAs )....
- 25833983 BACKGROUND Abdominal obesity and exaggerated postprandial lipemia are independent risk factors for cardiovascular disease ( CVD ) and mortality , and both are affected by dietary behavior . OBJECTIVE We investigated whether dietary supplementation with whey protein and medium-chain saturate....
Add new papers into the knowledge base
You can add a paper on the above topic to the knowledge base to see what happens. LangChain provides methods to add new documents to the vector database index.
In[ ]:
SOURCE_DOCUMENTS_DIR = "/home/notebooks/storage/files/"
SOURCE_DOCUMENTS_FILTER = "24219891.txt"

loader = DirectoryLoader(f"{SOURCE_DOCUMENTS_DIR}", glob=SOURCE_DOCUMENTS_FILTER)

print(f"Loading {SOURCE_DOCUMENTS_DIR} directory")
data = loader.load()
print(f"Splitting {len(data)} documents")
docs = splitter.split_documents(data)
print(f"Created {len(docs)} documents")

for i in tqdm(range(len(docs))):
    docs[i].metadata["trust_level"] = "high"

texts = [doc.page_content for doc in docs]
metadatas = [doc.metadata for doc in docs]

db.add_texts(texts, metadatas)
db.save_local("storage/deploy/faiss-db")
print(f"\n FAISS VectorDB has {db.index.ntotal} documents")
Out[ ]:
Loading /home/notebooks/storage/files/ directory
Splitting 1 documents
Created 1 documents
100%|██████████| 1/1 [00:00<00:00, 17119.61it/s]
FAISS VectorDB has 255 documents
The agent now has the context to answer the question with the trusted paper that you just added to the knowledge base.
In[ ]:
question = "Can high sweetener intake worsen pathogenesis of cardiometabolic disorders?"
display(HTML(get_completion(question)))
Out[ ]:
question
Can high sweetener intake worsen pathogenesis of cardiometabolic disorders?
answer
Yes, high intake of added sweeteners, especially high-fructose intake, is considered to have a causal role in the pathogenesis of cardiometabolic disorders. It may not only cause weight gain but also low-grade inflammation, which is an independent risk factor for developing type 2 diabetes and cardiovascular disease.
references
- 24219891 OBJECTIVE High intake of added sweeteners is considered to have a causal role in the pathogenesis of cardiometabolic disorders . OBJECTIVE Especially , high-fructose intake is regarded as potentially harmful to cardiometabolic health . OBJECTIVE It may cause not only weight gain but also lo....
- 25319187 BACKGROUND Whether the type of dietary fat could alter cardiometabolic responses to a hypercaloric diet is unknown . BACKGROUND In addition , subclinical cardiometabolic consequences of moderate weight gain require further study . RESULTS In a 7-week , double-blind , parallel-group , random....
- 24980134 BACKGROUND Managing cardiovascular risk factors is important for reducing vascular complications in type 2 diabetes , even in individuals who have achieved glycemic control . BACKGROUND Nut consumption is associated with reduced cardiovascular risk ; however , there is mixed evidence about ....
- 24284442 BACKGROUND Leucine is a key amino acid involved in the regulation of skeletal muscle protein synthesis . OBJECTIVE We assessed the effect of the supplementation of a lower-protein mixed macronutrient beverage with varying doses of leucine or a mixture of branched chain amino acids ( BCAAs )....
Deploy the knowledge base
The convenience method outlined in the cell below does the following:
- Builds a new custom model environment containing the contents of storage.
- Assembles a new custom model with the provided hooks.
- Deploys an unstructured custom model.
- Returns an object which can be used to make predictions.
Use environment_id to reuse an existing custom model environment that you are happy with; this enables shorter iteration cycles on the custom model hooks.
In[ ]:
import datarobotx as drx

deployment = drx.deploy(
    "storage/deploy/",
    name="Medical Research Papers redux",
    hooks={"score_unstructured": score_unstructured, "load_model": load_model},
    extra_requirements=["langchain", "faiss-cpu", "sentence-transformers", "openai"],
    # Re-use existing environment if you want to change the hook code,
    # and not requirements
    # environment_id="646e81c124b3abadc7c66da0",
)

# Enable storing prediction data, necessary for Data Export for monitoring purposes
deployment.dr_deployment.update_predictions_data_collection_settings(enabled=True)
Out[ ]:
# Deploying custom model
- Unable to auto-detect model type; any provided paths and files will be
exported - dependencies should be explicitly specified using
extra_requirements
- Preparing model and environment...
- Configured environment [[Custom] Medical Research Papers
redux](https://app.datarobot.com/model-registry/custom-environments/64edfad0abee78c9e6b9dc45)
with requirements:
python 3.9.16
datarobot-drum==1.10.3
datarobot-mlops==8.2.7
cloudpickle>=2.0.0
langchain==0.0.244
faiss-cpu==1.7.4
sentence-transformers==2.2.2
openai==0.27.8
- Awaiting custom environment build...
Out[ ]:
- Configuring and uploading custom model...
100%|███████████████████████████| 92.4M/92.4M [00:00<00:00, 240MB/s]
Out[ ]:
- Registered custom model [Medical Research Papers
redux](https://app.datarobot.com/model-registry/custom-models/64ee013fb4482185322c1375/info)
with target type: Unstructured
- Creating and deploying model package...
Out[ ]:
- Created deployment [Medical Research Papers
redux](https://app.datarobot.com/deployments/64ee0150da79fc4182e4e537/overview)
# Custom model deployment complete
In[ ]:
# Test the deployment
deployment.predict_unstructured(
    {
        "question": "Can high sweetener intake worsen pathogenesis of cardiometabolic disorders?",
        "openai_api_key": os.environ["OPENAI_API_KEY"],
    }
)
Out[ ]:
# Making predictions
- Making predictions with deployment [Medical Research Papers
redux](https://app.datarobot.com/deployments/64ee0150da79fc4182e4e537/overview)
Out[ ]:
# Predictions complete
{'question': 'Can high sweetener intake worsen pathogenesis of cardiometabolic disorders?',
'chat_history': [],
'answer': 'Yes, high intake of added sweeteners, especially high-fructose intake, is considered to have a causal role in the pathogenesis of cardiometabolic disorders. It may not only cause weight gain but also low-grade inflammation, which is an independent risk factor for developing type 2 diabetes and cardiovascular disease.',
'references': ['/home/notebooks/storage/files/24219891.txt',
'/home/notebooks/storage/files/25319187.txt',
'/home/notebooks/storage/files/24980134.txt',
'/home/notebooks/storage/files/24284442.txt']}
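Because score_unstructured accepts an optional chat_history, the deployment can also handle multi-turn conversations: pass the previous human/AI exchanges back along with the new question. A minimal sketch of a hypothetical follow-up turn (the follow-up question is illustrative only; the earlier answer is reused from the output above):

# chat_history is a list of [human, ai] pairs, as described in the hook's docstring.
deployment.predict_unstructured(
    {
        "question": "Which of those papers supports that conclusion?",
        "openai_api_key": os.environ["OPENAI_API_KEY"],
        "chat_history": [
            [
                "Can high sweetener intake worsen pathogenesis of cardiometabolic disorders?",
                "Yes, high intake of added sweeteners, especially high-fructose intake, is considered to have a causal role in the pathogenesis of cardiometabolic disorders.",
            ]
        ],
    }
)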
Conclusion
In this accelerator, you have seen how to:
- Use predictive models to classify text files.
- Create a vector store out of research paper abstracts.
- Use Retrieval Augmented Generation with a generative AI model.
- Deploy a generative AI model to the DataRobot platform.
- Create a conversational agent that can be used by healthcare professionals.