Many smaller companies offer powerful APIs for free, or with a free trial period that, depending on your usage, can last up to a year. We will look at some of these APIs and explore their benefits and uses.
Voyage is a team of leading AI researchers and engineers building embedding models for better retrieval and RAG.
On par with OpenAI's embedding models
Pricing: currently free (February 2024)
Documentation: https://docs.voyageai.com/
Get started: https://docs.voyageai.com/
Supported embedding models, with more on the way.
<iframe src="https://medium.com/media/f8464a95617451325678308e64d14308"frameborder=0></iframe>安裝voyage庫:
# Use pip to install the latest version of the 'voyageai' Python package.
pip install voyageai
Let's use one of the embedding models, voyage-2, and look at its output:
# Import the 'voyageai' module
import voyageai

# Create a 'Client' object from the 'voyageai' module and initialize it with your API key
vo = voyageai.Client(api_key="<your secret voyage api key>")

# User query
user_query = "when apple is releasing their new iPhone?"

# The 'model' parameter is set to "voyage-2", and the 'input_type' parameter is set to "document"
documents_embeddings = vo.embed(
    [user_query], model="voyage-2", input_type="document"
).embeddings

# Print the embedding
print(documents_embeddings)
########### OUTPUT ###########
[0.12, 0.412, 0.573, ..., 0.861] # dimension is 1024
########### OUTPUT ###########
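Since Voyage positions these embeddings for retrieval and RAG, here is a minimal sketch of ranking a couple of documents against the query by cosine similarity. The toy documents and the helper function are illustrative additions, and the input_type="query" / "document" split follows Voyage's documented convention, so adapt it to your own corpus.
import numpy as np
import voyageai

vo = voyageai.Client(api_key="<your secret voyage api key>")

# A tiny toy corpus (illustrative documents, not returned by the API)
documents = [
    "Apple unveiled its latest iPhone at the September event.",
    "Bananas are rich in potassium and easy to grow at home.",
]

# Embed the documents and the query separately
doc_vectors = vo.embed(documents, model="voyage-2", input_type="document").embeddings
query_vector = vo.embed(
    ["when apple is releasing their new iPhone?"], model="voyage-2", input_type="query"
).embeddings[0]

# Cosine similarity between two vectors
def cosine(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pick the document closest to the query
scores = [cosine(query_vector, d) for d in doc_vectors]
print(documents[int(np.argmax(scores))])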
Anyscale, the company behind Ray, has released APIs that let LLM developers run and fine-tune open-source LLMs quickly, cost-effectively, and at scale.
Run/fine-tune powerful open-source LLMs at very low cost or for free
Pricing (no credit card required): $10 free credit, then $0.15 per million tokens
Documentation: https://docs.endpoints.anyscale.com/
Get started: https://app.endpoints.anyscale.com/welcome
Supported LLMs and embedding models
<iframe src="https://medium.com/media/d063ecf567aa49f3bab642c0704e6d6e"frameborder=0></iframe>Anyscale 端點可與 OpenAI 函式庫搭配使用:
# Use pip to install the latest version of the 'openai' Python package.
pip install openai
Let's use one of the text-generation LLMs and look at its output:
# Import necessary modules
import openai

# Define the Anyscale endpoint token
ANYSCALE_ENDPOINT_TOKEN = "<your secret anyscale api key>"

# Create an OpenAI client with the Anyscale base URL and API key
oai_client = openai.OpenAI(
    base_url="https://api.endpoints.anyscale.com/v1",
    api_key=ANYSCALE_ENDPOINT_TOKEN,
)

# Define the model to be used for chat completions
model = "mistralai/Mistral-7B-Instruct-v0.1"

# Define a prompt for the chat completion
prompt = '''hello, how are you?
'''

# Use the Anyscale-hosted model for chat completions
# Send a user message using the defined prompt
response = oai_client.chat.completions.create(
    model=model,
    messages=[
        {"role": "user", "content": prompt}
    ],
)

# Print the response
print(response.choices[0].message.content)
########### OUTPUT ###########
Hello! I am just a computer program, so I don't have
feelings or emotions like a human does ...
########### OUTPUT ###########
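The supported-models list above also includes embedding models, so here is a minimal sketch of requesting embeddings through the same OpenAI-compatible client. Note that "thenlper/gte-large" is an assumed model name for illustration; substitute whichever embedding model appears in Anyscale's list.
import openai

# Reuse the Anyscale endpoint through the OpenAI-compatible client
oai_client = openai.OpenAI(
    base_url="https://api.endpoints.anyscale.com/v1",
    api_key="<your secret anyscale api key>",
)

# NOTE: "thenlper/gte-large" is an assumed model name for illustration;
# pick an embedding model from Anyscale's supported-models list.
embedding_response = oai_client.embeddings.create(
    model="thenlper/gte-large",
    input="when apple is releasing their new iPhone?",
)

# Each item in 'data' carries an 'embedding' vector of floats
print(len(embedding_response.data[0].embedding))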
You may already know this one, but it is worth mentioning: Google released their Gemini multimodal model last year, and its free-tier API access makes it all the more interesting.
Chat with text and images (similar to GPT-4), plus embedding models
Pricing: free tier (60 queries per minute)
Documentation: https://ai.google.dev/docs
Get started: https://makersuite.google.com/app/apikey
Supported models
<iframe src="https://medium.com/media/b1f73ec8466b9931984f97394495355c"frameborder=0></iframe>安裝所需的庫
# Install necessary libraries
pip install google-generativeai grpcio grpcio-tools
Using the text model gemini-pro
# Import google.generativeai as genai
import google.generativeai as genai

# Set the API key
genai.configure(api_key="<your secret gemini api key>")

# Set the text model
model = genai.GenerativeModel('gemini-pro')

# Generate a response
response = model.generate_content("What is the meaning of life?")

# Print the response
print(response.text)
########### OUTPUT ###########
The query of life purpose has perplexed people
across centuries ...
########### OUTPUT ###########
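The same gemini-pro model can also hold a multi-turn conversation. Below is a minimal sketch using the library's start_chat and send_message helpers; the prompts are illustrative examples.
import google.generativeai as genai

genai.configure(api_key="<your secret gemini api key>")

# Start a chat session so the model keeps the conversation history
model = genai.GenerativeModel('gemini-pro')
chat = model.start_chat(history=[])

# First turn
response = chat.send_message("Give me a one-line definition of RAG.")
print(response.text)

# Follow-up turn that relies on the previous answer
response = chat.send_message("Now explain it to a five-year-old.")
print(response.text)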
Using the vision model gemini-pro-vision
# Import google.generativeai as genai
import google.generativeai as genai

# Set the API key
genai.configure(api_key="<your secret gemini api key>")

# Set the vision model
model = genai.GenerativeModel('gemini-pro-vision')

# Load the image
import PIL.Image
img = PIL.Image.open('cat_wearing_hat.jpg')

# Chat with the image
response = model.generate_content([img, "Is there a cat in this image?"])

# Print the response
print(response.text)
########### OUTPUT ###########
Yes there is a cat in this image
########### OUTPUT ###########
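The free tier also covers Google's embedding model. Here is a minimal sketch using genai.embed_content; "models/embedding-001" is the embedding model exposed by the library at the time of writing, so verify it against the supported-models list.
import google.generativeai as genai

genai.configure(api_key="<your secret gemini api key>")

# Embed a piece of text for retrieval use cases; verify the model name
# against the supported-models list.
result = genai.embed_content(
    model="models/embedding-001",
    content="when apple is releasing their new iPhone?",
    task_type="retrieval_query",
)

# 'embedding' is a plain list of floats
print(len(result["embedding"]))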
Image depth estimation is the task of working out how far away the objects in an image are. It is an important problem in computer vision because it helps with tasks such as self-driving cars. A Hugging Face Space by Lihe Young provides an API through which you can find the depth of an image.
Find image depth in seconds, without storing or loading the model
Pricing: free (HuggingFace token required)
Get a HuggingFace token: https://huggingface.co/settings/tokens
Web demo: https://huggingface.co/spaces/LiheYoung/Depth-Anything
Supported models:
Install the required libraries
# Install necessary libraries
pip install gradio_client
Finding image depth using the depth-anything model.
from gradio_client import Client

# Your Hugging Face API token
huggingface_token = "YOUR_HUGGINGFACE_TOKEN"

# Create a Client instance with the URL of the Hugging Face Space,
# authenticating with your Hugging Face API token
client = Client(
    "https://liheyoung-depth-anything.hf.space/--replicas/odat1/",
    hf_token=huggingface_token,
)

# Image link or path
my_image = "house.jpg"

# Use the Client to make a prediction
result = client.predict(
    my_image,
    api_name="/on_submit",
)

# Load and display the resulting depth map
from IPython.display import Image
image_path = result[0][1]
Image(filename=image_path)
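gradio_client usually writes the returned file into a temporary directory, so here is a small sketch (assuming image_path from the snippet above) that keeps a copy of the depth map next to your script.
import shutil

# Copy the depth map out of gradio_client's temporary output directory so it
# persists after the session (image_path comes from the snippet above)
saved_path = shutil.copy(image_path, "house_depth.png")
print(saved_path)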
You can build a webpage template using the API provided by HuggingFace M4.
Just take a screenshot of the webpage and pass it to the API.
Pricing: free (HuggingFace token required)
Get a HuggingFace token: https://huggingface.co/settings/tokens
Web demo: https://huggingface ... Screenshot2html
Install the required libraries
# Install necessary libraries
pip install gradio_client
Convert a website screenshot to code using the screenshot-to-code model.
# Import the required library
from gradio_client import Client

# Your Hugging Face API token
huggingface_token = "YOUR_HUGGINGFACE_TOKEN"

# Create a Client instance with the URL of the Hugging Face Space,
# authenticating with your Hugging Face API token
client = Client(
    "https://huggingfacem4-screenshot2html.hf.space/--replicas/cpol9/",
    hf_token=huggingface_token,
)

# Website screenshot link or path
my_image = "mywebpage_screenshot.jpg"

# Use the Client to generate code
result = client.predict(
    my_image,
    api_name="/model_inference",
)

# Print the output
print(result)
########### OUTPUT ###########
<html>
<style>
body {
...
</body>
</html>
########### OUTPUT ###########
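To inspect the generated template in a browser, a small sketch that writes the result to disk is shown below. It assumes the endpoint returns the markup as a plain string, as the printed output above suggests.
import webbrowser
from pathlib import Path

# Save the generated markup (assumed to be a plain HTML string, see the
# printed output above) and open it locally for inspection
output_file = Path("generated_page.html")
output_file.write_text(result, encoding="utf-8")
webbrowser.open(output_file.resolve().as_uri())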
Convert audio to text using the Whisper API.
Simply convert audio to text with the API, without loading the Whisper model yourself.
Pricing: free (HuggingFace token required)
Get a HuggingFace token: https://huggingface.co/settings/tokens
Web demo: https://hugging … whisper
Install the required libraries
# Install necessary libraries
pip install gradio_client
Convert audio to text using the Whisper model.
# Import the required library
from gradio_client import Client

# Your Hugging Face API token
huggingface_token = "YOUR_HUGGINGFACE_TOKEN"

# Create a Client instance with the URL of the Whisper Space deployment
# (replace the placeholder with the Space URL from the web demo link above),
# authenticating with your Hugging Face API token
client = Client(
    "<URL of the Whisper Space deployment>",
    hf_token=huggingface_token,
)

# Audio link or path
my_audio = "myaudio.mp4"

# Use the Client to generate a transcription
result = client.predict(
    my_audio,
    "transcribe",  # str in 'Task' Radio component
    api_name="/predict",
)

# Print the output
print(result)
########### OUTPUT ###########
Hi, how are you?
########### OUTPUT ###########
You can explore many more APIs through Hugging Face Spaces. Many small and medium-sized companies offer powerful generative AI tools at very low cost, such as OpenAI embeddings at $0.00013 per 1K tokens. Be sure to check the licenses, since many of these free-tier APIs either limit daily requests or are restricted to non-commercial use.