
[導(dǎo)讀]通常,我們開發(fā)基于 LLM 的檢索應(yīng)用程序的知識庫包含大量各種格式的數(shù)據(jù)。為了向LLM提供最相關(guān)的上下文來回答知識庫中特定部分的問題,我們依賴于對知識庫中的文本進行分塊并將其放在方便的位置。

Typically, the knowledge bases behind the LLM-based retrieval applications we build contain large amounts of data in a wide variety of formats. To provide the LLM with the most relevant context for answering questions about a specific part of the knowledge base, we rely on chunking the text in the knowledge base and storing the chunks where they can be retrieved efficiently.

Chunking

Chunking is the process of splitting text into meaningful units to improve information retrieval. By ensuring that each chunk represents one focused idea or point, chunking helps preserve the contextual integrity of the content.
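As a minimal illustration of the difference (using a toy string, not the article's example text), compare a fixed-width split with a sentence-level split:

```python
import re

toy = "AI is transforming healthcare. Ethics and privacy remain open challenges."

# Fixed-width slicing ignores sentence boundaries and can cut words mid-way.
fixed_chunks = [toy[i:i + 20] for i in range(0, len(toy), 20)]

# Sentence-level splitting keeps each focused idea in its own chunk.
sentence_chunks = re.split(r'(?<=[.!?]) +', toy)

print(fixed_chunks[0])     # a 20-character fragment with no complete idea
print(sentence_chunks[0])  # a full sentence: "AI is transforming healthcare."
```

The fixed-width chunks carry no self-contained meaning, while each sentence chunk does; this is the contextual integrity the definition above refers to.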

In this article, we will cover three aspects of chunking:

· How poor chunking leads to less relevant results

· How good chunking leads to better results

· How good chunking combined with metadata leads to well-contextualized results

To demonstrate the importance of chunking effectively, we will take the same piece of text, apply three different chunking methods to it, and examine how information is retrieved for a given query.

Chunking and storing in Qdrant

Let's look at the code below, which shows three different ways of chunking the same text.

Python

import qdrant_client
from qdrant_client.models import PointStruct, Distance, VectorParams
import openai
import yaml

# Load configuration
with open('config.yaml', 'r') as file:
    config = yaml.safe_load(file)

# Initialize Qdrant client
client = qdrant_client.QdrantClient(config['qdrant']['url'], api_key=config['qdrant']['api_key'])

# Initialize OpenAI with the API key
openai.api_key = config['openai']['api_key']

def embed_text(text):
    print(f"Generating embedding for: '{text[:50]}'...")  # Show a snippet of the text being embedded
    response = openai.embeddings.create(
        input=[text],  # Input needs to be a list
        model=config['openai']['model_name']
    )
    embedding = response.data[0].embedding  # Access using the attribute, not as a dictionary
    print(f"Generated embedding of length {len(embedding)}.")  # Confirm embedding generation
    return embedding

# Function to create a collection if it doesn't exist
def create_collection_if_not_exists(collection_name, vector_size):
    collections = client.get_collections().collections
    if collection_name not in [collection.name for collection in collections]:
        client.create_collection(
            collection_name=collection_name,
            vectors_config=VectorParams(size=vector_size, distance=Distance.COSINE)
        )
        print(f"Created collection: {collection_name} with vector size: {vector_size}")
    else:
        print(f"Collection {collection_name} already exists.")

# Text to be chunked; used purely for illustration and example
text = """
Artificial intelligence is transforming industries across the globe. One of the key areas where AI is making a significant impact is healthcare. AI is being used to develop new drugs, personalize treatment plans, and even predict patient outcomes. Despite these advancements, there are challenges that must be addressed. The ethical implications of AI in healthcare, data privacy concerns, and the need for proper regulation are all critical issues. As AI continues to evolve, it is crucial that these challenges are not overlooked. By addressing these issues head-on, we can ensure that AI is used in a way that benefits everyone.
"""

# Poor chunking strategy: fixed 40-character slices
def poor_chunking(text, chunk_size=40):
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    print(f"Poor Chunking produced {len(chunks)} chunks: {chunks}")
    return chunks

# Good chunking strategy: sentence-level splits
def good_chunking(text):
    import re
    sentences = re.split(r'(?<=[.!?]) +', text)
    print(f"Good Chunking produced {len(sentences)} chunks: {sentences}")
    return sentences

# Good chunking with metadata
def good_chunking_with_metadata(text):
    chunks = good_chunking(text)
    metadata_chunks = []
    for chunk in chunks:
        if "healthcare" in chunk:
            metadata_chunks.append({"text": chunk, "source": "Healthcare Section", "topic": "AI in Healthcare"})
        elif "ethical implications" in chunk or "data privacy" in chunk:
            metadata_chunks.append({"text": chunk, "source": "Challenges Section", "topic": "AI Challenges"})
        else:
            metadata_chunks.append({"text": chunk, "source": "General", "topic": "AI Overview"})
    print(f"Good Chunking with Metadata produced {len(metadata_chunks)} chunks: {metadata_chunks}")
    return metadata_chunks

# Store chunks in Qdrant
def store_chunks(chunks, collection_name):
    if len(chunks) == 0:
        print(f"No chunks were generated for the collection '{collection_name}'.")
        return

    # Generate embedding for the first chunk to determine vector size
    sample_text = chunks[0] if isinstance(chunks[0], str) else chunks[0]["text"]
    sample_embedding = embed_text(sample_text)
    vector_size = len(sample_embedding)
    create_collection_if_not_exists(collection_name, vector_size)

    for idx, chunk in enumerate(chunks):
        chunk_text = chunk if isinstance(chunk, str) else chunk["text"]
        embedding = embed_text(chunk_text)
        payload = chunk if isinstance(chunk, dict) else {"text": chunk_text}  # Always ensure there's text in the payload
        client.upsert(collection_name=collection_name, points=[
            PointStruct(id=idx, vector=embedding, payload=payload)
        ])

    print(f"Chunks successfully stored in the collection '{collection_name}'.")

# Execute chunking and storing separately for each strategy
print("Starting poor_chunking...")
store_chunks(poor_chunking(text), "poor_chunking")

print("Starting good_chunking...")
store_chunks(good_chunking(text), "good_chunking")

print("Starting good_chunking_with_metadata...")
store_chunks(good_chunking_with_metadata(text), "good_chunking_with_metadata")

The code above does the following:

· The embed_text method takes a piece of text, generates an embedding using the OpenAI embedding model, and returns the generated embedding.

· Initializes the text string used for chunking and subsequent content retrieval

· Poor chunking strategy: splits the text into chunks of 40 characters each

· Good chunking strategy: splits the text by sentence for more meaningful context

· Good chunking strategy with metadata: attaches appropriate metadata to the sentence-level chunks

· Once embeddings are generated for the chunks, they are stored in the corresponding collections in Qdrant Cloud.

Keep in mind that the poor chunks are created only to show how bad chunking affects retrieval.

Below is a screenshot of the chunks from Qdrant Cloud, where you can see the metadata added to the sentence-level chunks to indicate the source and topic.
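For reference, the payload stored with each metadata-enriched point has the shape below (the field values are taken from the chunking code above):

```python
import json

# Payload of one sentence-level chunk as stored in the
# good_chunking_with_metadata collection
payload = {
    "text": "One of the key areas where AI is making a significant impact is healthcare.",
    "source": "Healthcare Section",
    "topic": "AI in Healthcare",
}
print(json.dumps(payload, indent=2))
```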

Retrieval results based on the chunking strategy

Now let's write some code to retrieve content from the Qdrant vector DB based on a query.

Python

import qdrant_client
import openai
import yaml

# Load configuration
with open('config.yaml', 'r') as file:
    config = yaml.safe_load(file)

# Initialize Qdrant client
client = qdrant_client.QdrantClient(config['qdrant']['url'], api_key=config['qdrant']['api_key'])

# Initialize OpenAI with the API key
openai.api_key = config['openai']['api_key']

def embed_text(text):
    response = openai.embeddings.create(
        input=[text],  # Input needs to be a list
        model=config['openai']['model_name']
    )
    return response.data[0].embedding

def retrieve_and_print(collection_name, query_embedding, query):
    # Retrieve the top 3 vectors closest to the query embedding
    results = client.search(
        collection_name=collection_name,
        query_vector=query_embedding,
        limit=3
    )
    print(f"\nResults from '{collection_name}' collection for the query: '{query}':")
    for idx, result in enumerate(results, start=1):
        payload = result.payload or {}
        print(f"\nResult {idx}:")
        print(f"Text: {payload.get('text', 'N/A')}")
        print(f"Source: {payload.get('source', 'N/A')}")  # N/A when the chunk carries no metadata
        print(f"Topic: {payload.get('topic', 'N/A')}")

# Define the query and generate its embedding
query = "ethical implications of AI in healthcare"
query_embedding = embed_text(query)

# Search each collection with the same query
for collection_name in ["poor_chunking", "good_chunking", "good_chunking_with_metadata"]:
    retrieve_and_print(collection_name, query_embedding, query)

The code above does the following:

· Defines the query and generates an embedding for it

· The search query is set to "ethical implications of AI in healthcare".

· The retrieve_and_print function searches a specific Qdrant collection and retrieves the top 3 vectors closest to the query embedding.

Now let's look at the output:

python retrieval_test.py

Results from 'poor_chunking' collection for the query: 'ethical implications of AI in healthcare':

Result 1:

Text: . The ethical implications of AI in heal

Source: N/A

Topic: N/A

Result 2:

Text: ant impact is healthcare. AI is being us

Source: N/A

Topic: N/A

Result 3:

Text:

Artificial intelligence is transforming

Source: N/A

Topic: N/A

Results from 'good_chunking' collection for the query: 'ethical implications of AI in healthcare':

Result 1:

Text: The ethical implications of AI in healthcare, data privacy concerns, and the need for proper regulation are all critical issues.

Source: N/A

Topic: N/A

Result 2:

Text: One of the key areas where AI is making a significant impact is healthcare.

Source: N/A

Topic: N/A

Result 3:

Text: By addressing these issues head-on, we can ensure that AI is used in a way that benefits everyone.

Source: N/A

Topic: N/A

Results from 'good_chunking_with_metadata' collection for the query: 'ethical implications of AI in healthcare':

Result 1:

Text: The ethical implications of AI in healthcare, data privacy concerns, and the need for proper regulation are all critical issues.

Source: Healthcare Section

Topic: AI in Healthcare

Result 2:

Text: One of the key areas where AI is making a significant impact is healthcare.

Source: Healthcare Section

Topic: AI in Healthcare

Result 3:

Text: By addressing these issues head-on, we can ensure that AI is used in a way that benefits everyone.

Source: General

Topic: AI Overview

The output for the same search query differs based on the chunking strategy implemented.

· Poor chunking strategy: you can see that the results here are less relevant, because the text was split into arbitrary small chunks.

· Good chunking strategy: the results here are more relevant, because the text was split into sentences that preserve semantic meaning.

· Good chunking strategy with metadata: the results here are the most accurate, because the text was chunked thoughtfully and enriched with metadata.

Inferences from the experiment

· Chunking needs a carefully crafted strategy, and chunks should be neither too small nor too large.

· An example of poor chunking is chunks so small that they cut off sentences at unnatural points, or so large that multiple topics land in the same chunk, which makes retrieval very confusing.

· The whole idea of chunking revolves around providing better context to the LLM.

· Metadata greatly enhances properly structured chunks by adding an extra layer of context. For example, we added the source and topic as metadata elements to our chunks.

· Retrieval systems benefit from this additional information. For example, if the metadata indicates that a chunk belongs to the "Healthcare Section", the system can prioritize those chunks for healthcare-related queries.

· With improved chunking, results can be structured and categorized. If a query matches multiple contexts within the same text, we can determine which context or section a piece of information belongs to by looking at the chunk's metadata.
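The prioritization idea above can be sketched in plain Python over the chunk payloads (prioritize_by_metadata is a hypothetical helper, not part of the scripts above; in Qdrant itself a similar effect can be achieved with a payload filter on the source field):

```python
def prioritize_by_metadata(results, preferred_source):
    # Stable sort: chunks from the preferred section come first, while the
    # original (similarity-based) order is kept within each group.
    return sorted(results, key=lambda r: r.get("source") != preferred_source)

# Chunk payloads as returned for a healthcare-related query
results = [
    {"text": "By addressing these issues head-on, we can ensure...", "source": "General"},
    {"text": "The ethical implications of AI in healthcare...", "source": "Healthcare Section"},
]

ranked = prioritize_by_metadata(results, "Healthcare Section")
print(ranked[0]["source"])  # Healthcare Section
```

Because Python's sort is stable, chunks within the preferred section stay in their similarity order, so metadata refines the ranking without discarding the vector search results.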

Keep these strategies in mind, and happy chunking in your LLM-based search applications.
