Many small companies offer powerful APIs for free, or provide a free trial period that, depending on your usage, can last up to a year. We will look at some of these APIs and explore their benefits and uses.
Voyage is a team of leading AI researchers and engineers building embedding models for better retrieval and RAG.
As good as the OpenAI embedding models
Pricing: currently free (February 2024)
Documentation: https://docs.voyageai.com/
Get started: https://docs.voyageai.com/
Supported embedding models, with more on the way.
<iframe src="https://medium.com/media/f8464a95617451325678308e64d14308"frameborder=0></iframe>安装voyage库:
# Use pip to install the latest version of the 'voyageai' Python package.
pip install voyageai
Let's use one of the embedding models, voyage-2, and look at its output:
# Import the 'voyageai' module
import voyageai

# Create a 'Client' object from the 'voyageai' module and initialize it with your API key
vo = voyageai.Client(api_key="<your secret voyage api key>")

# User query
user_query = "when apple is releasing their new Iphone?"

# The 'model' parameter is set to "voyage-2", and the 'input_type' parameter is set to "document"
documents_embeddings = vo.embed(
    [user_query], model="voyage-2", input_type="document"
).embeddings

# Print the embedding
print(documents_embeddings)
########### OUTPUT ###########
[0.12, 0.412, 0.573, ..., 0.861]  # dimension is 1024
########### OUTPUT ###########
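Since Voyage pitches these embeddings for retrieval and RAG, here is a minimal retrieval sketch: it embeds a couple of documents with input_type="document", embeds the query with input_type="query", and ranks the documents by cosine similarity. The document texts and the numpy-based scoring are illustrative assumptions, not part of the Voyage API.

# A minimal retrieval sketch (assumes numpy is installed; the documents are made up)
import numpy as np
import voyageai

vo = voyageai.Client(api_key="<your secret voyage api key>")

# Hypothetical documents to search over
documents = [
    "Apple typically announces new iPhone models in September.",
    "The capital of France is Paris.",
]

# Embed the documents and the query with matching 'input_type' values
doc_embeddings = vo.embed(documents, model="voyage-2", input_type="document").embeddings
query_embedding = vo.embed(
    ["when apple is releasing their new Iphone?"], model="voyage-2", input_type="query"
).embeddings[0]

# Rank the documents by cosine similarity to the query
def cosine(a, b):
    a, b = np.array(a), np.array(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine(query_embedding, d) for d in doc_embeddings]
print(documents[int(np.argmax(scores))])  # expected: the iPhone-related document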
Anyscale, the company behind Ray, has released APIs that let LLM developers run and fine-tune open-source LLMs quickly, cost-effectively, and at scale.
Run and fine-tune powerful open-source LLMs at very low cost or for free
Pricing (no credit card required): $10 free tier, then $0.15 per million tokens
Documentation: https://docs.endpoints.anyscale.com/
Get started: https://app.endpoints.anyscale.com/welcome
Supported LLMs and embedding models
<iframe src="https://medium.com/media/d063ecf567aa49f3bab642c0704e6d6e"frameborder=0></iframe>Anyscale 端点可与 OpenAI 库配合使用:
# Use pip to install the latest version of the 'openai' Python package.
pip install openai
Let's use one of the text-generation LLMs and look at its output:
# Import necessary modules
import openai

# Define the Anyscale endpoint token
ANYSCALE_ENDPOINT_TOKEN = "<your secret anyscale api key>"

# Create an OpenAI client with the Anyscale base URL and API key
oai_client = openai.OpenAI(
    base_url="https://api.endpoints.anyscale.com/v1",
    api_key=ANYSCALE_ENDPOINT_TOKEN,
)

# Define the model to be used for chat completions
model = "mistralai/Mistral-7B-Instruct-v0.1"

# Define a prompt for the chat completion
prompt = '''hello, how are you?
'''

# Use the Anyscale-hosted model for chat completions,
# sending a user message with the defined prompt
response = oai_client.chat.completions.create(
    model=model,
    messages=[
        {"role": "user", "content": prompt}
    ],
)

# Print the response
print(response.choices[0].message.content)
########### OUTPUT ###########
Hello! I am just a computer program, so I don't have
feelings or emotions like a human does ...
########### OUTPUT ###########
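Because Anyscale Endpoints are OpenAI-compatible, the same client can also stream tokens as they are generated. A minimal sketch, assuming the standard stream=True option of the OpenAI Python library and reusing the client and model defined above:

# Stream the completion token by token (reuses 'oai_client' and 'model' from above)
stream = oai_client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "Write one sentence about the ocean."}],
    stream=True,
)

# Each chunk carries a small delta of the generated text
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)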
You may already know this one, but it is worth mentioning: Google released its Gemini multimodal models last year, and the free-tier API access makes them even more interesting.
Chat with text and images (similar to GPT-4) and use embedding models
Pricing: free tier (60 queries per minute)
Documentation: https://ai.google.dev/docs
Get started: https://makersuite.google.com/app/apikey
Supported models
<iframe src="https://medium.com/media/b1f73ec8466b9931984f97394495355c"frameborder=0></iframe>安装所需的库
# Install necessary libraries
pip install google-generativeai grpcio grpcio-tools
Use the text model gemini-pro
# Import google.generativeai as genai
import google.generativeai as genai

# Set the API key
genai.configure(api_key="<your secret gemini api key>")

# Set up the text model
model = genai.GenerativeModel('gemini-pro')

# Generate a response
response = model.generate_content("What is the meaning of life?")

# Print the response
print(response.text)
########### OUTPUT ###########
The query of life purpose has perplexed people
across centuries ...
########### OUTPUT ###########
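The same library also supports multi-turn conversations, which is handy for chat-style applications. A minimal sketch, assuming the start_chat/send_message interface of the google-generativeai package and the same API key configuration as above:

# Start a multi-turn chat session with gemini-pro (assumes 'genai' is already configured)
model = genai.GenerativeModel('gemini-pro')
chat = model.start_chat(history=[])

# First turn
first = chat.send_message("Give me one tip for learning Python.")
print(first.text)

# Follow-up turn; the chat object keeps the conversation history
second = chat.send_message("Can you expand on that tip?")
print(second.text)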
Use the image model gemini-pro-vision
# Import google.generativeai as genai
import google.generativeai as genai

# Set the API key
genai.configure(api_key="<your secret gemini api key>")

# Set up the vision model
model = genai.GenerativeModel('gemini-pro-vision')

# Load the image
import PIL.Image
img = PIL.Image.open('cat_wearing_hat.jpg')

# Chat with the image
response = model.generate_content([img, "Is there a cat in this image?"])

# Print the response
print(response.text)
########### OUTPUT ###########
Yes there is a cat in this image
########### OUTPUT ###########
Image depth estimation is about working out how far away the objects in an image are. It is an important problem in computer vision because it helps with tasks such as self-driving cars. A Hugging Face Space by Lihe Young provides an API through which you can find the depth of an image.
Find image depth in seconds, without storing or loading the model
Pricing: free (HuggingFace token required)
Get a HuggingFace token: https://huggingface.co/settings/tokens
Web demo: https://huggingface.co/spaces/LiheYoung/Depth-Anything
Supported models:
Install the required libraries
# Install necessary libraries
pip install gradio_client
Find image depth using the depth-anything model.
from gradio_client import Client

# Your Hugging Face API token
huggingface_token = "YOUR_HUGGINGFACE_TOKEN"

# Create a Client instance with the URL of the Hugging Face Space,
# passing the token so that authenticated requests can be made
client = Client(
    "https://liheyoung-depth-anything.hf.space/--replicas/odat1/",
    hf_token=huggingface_token,
)

# Image link or path
my_image = "house.jpg"

# Use the Client to make a prediction
result = client.predict(
    my_image,
    api_name="/on_submit",
)

# Load and display the resulting depth map (inside a notebook)
from IPython.display import Image
image_path = result[0][1]
Image(filename=image_path)
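Outside a notebook you can simply copy the returned file somewhere permanent. A small sketch, assuming result[0][1] is the path of the generated depth map as in the call above:

# Save the generated depth map next to your script (assumes the result layout above)
import shutil

depth_map_path = result[0][1]
shutil.copy(depth_map_path, "house_depth_map.png")
print("Depth map saved to house_depth_map.png")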
You can create a web page template using the API provided by HuggingFace M4.
Just take a screenshot of a web page and pass it to the API.
Pricing: free (HuggingFace token required)
Get a HuggingFace token: https://huggingface.co/settings/tokens
Web demo: https://huggingface ... Screenshot2html
Install the required libraries
# Install necessary libraries
pip install gradio_client
Convert a website screenshot into code using the screenshot-to-code model.
# Import the required library
from gradio_client import Client

# Your Hugging Face API token
huggingface_token = "YOUR_HUGGINGFACE_TOKEN"

# Create a Client instance with the URL of the Hugging Face Space,
# passing the token so that authenticated requests can be made
client = Client(
    "https://huggingfacem4-screenshot2html.hf.space/--replicas/cpol9/",
    hf_token=huggingface_token,
)

# Website image link or path
my_image = "mywebpage_screenshot.jpg"

# Use the Client to generate code
result = client.predict(
    my_image,
    api_name="/model_inference",
)

# Print the output
print(result)
########### OUTPUT ###########
<html>
<style>
body {
...
</body>
</html>
########### OUTPUT ###########
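To inspect the generated template, you can write the returned markup to a file and open it locally. A minimal sketch, assuming result is the HTML string printed above:

# Write the generated HTML to a file and open it in the default browser
import webbrowser
from pathlib import Path

html_path = Path("generated_page.html")
html_path.write_text(result, encoding="utf-8")
webbrowser.open(html_path.resolve().as_uri())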
Convert audio to text using the Whisper API.
Just use the API to convert audio to text, without loading the Whisper model.
Pricing: free (HuggingFace token required)
Get a HuggingFace token: https://huggingface.co/settings/tokens
Web demo: https://hugging … Whisper
Install the required libraries
# Install necessary libraries
pip install gradio_client
Convert audio to text using the Whisper model.
# Import the required library
from gradio_client import Client

# Your Hugging Face API token
huggingface_token = "YOUR_HUGGINGFACE_TOKEN"

# Create a Client instance with the URL of the Hugging Face Space,
# passing the token so that authenticated requests can be made
# (point the URL at the Whisper Space deployment you are using)
client = Client(
    "https://huggingfacem4-screenshot2html.hf.space/--replicas/cpol9/",
    hf_token=huggingface_token,
)

# Audio link or path
my_audio = "myaudio.mp4"

# Use the Client to generate a response
result = client.predict(
    my_audio,
    "transcribe",  # str in 'Task' Radio component
    api_name="/predict",
)

# Print the output
print(result)
########### OUTPUT ###########
Hi, how are you?
########### OUTPUT ###########
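The 'Task' radio component in the same demo also exposes a translation mode. A sketch under that assumption (check the Space UI for the exact label of the other radio value):

# Ask the same endpoint to translate non-English speech into English text
# (assumes "translate" is the other value accepted by the 'Task' Radio component)
result = client.predict(
    my_audio,
    "translate",  # str in 'Task' Radio component
    api_name="/predict",
)
print(result)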
You can explore more APIs through Hugging Face Spaces. Many small and mid-sized companies offer powerful generative AI tools at very low cost, for example OpenAI embeddings at $0.00013 per 1K tokens. Be sure to check the licenses, because many of the free-tier APIs either limit daily requests or are restricted to non-commercial use.