Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI libraries for simplifying ML compute.

Learn more about the Ray AI Libraries, about Ray Core and its key abstractions, and about monitoring and debugging in the rest of the documentation.
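To give a concrete feel for Ray Core's key abstractions, here is a minimal sketch of a task, an actor, and object references; the `square` and `Counter` names are illustrative examples, not part of the Ray API:

```python
import ray

ray.init()  # start a local Ray runtime

# A task: a regular Python function executed remotely.
@ray.remote
def square(x):
    return x * x

# An actor: a stateful worker created from a Python class.
@ray.remote
class Counter:
    def __init__(self):
        self.total = 0

    def add(self, value):
        self.total += value
        return self.total

# Remote calls return object references (futures); ray.get resolves them.
refs = [square.remote(i) for i in range(4)]
print(ray.get(refs))  # [0, 1, 4, 9]

counter = Counter.remote()
print(ray.get(counter.add.remote(10)))  # 10
```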
Ray runs on any machine, cluster, cloud provider, and Kubernetes, and features a growing ecosystem of community integrations.
Install Ray with `pip install ray`. For nightly wheels, see the Installation page.
Today's ML workloads are increasingly compute-intensive. As convenient as they are, single-node development environments such as your laptop cannot scale to meet these demands.
Ray is a unified way to scale Python and AI applications from a laptop to a cluster: the same code runs in both environments without modification. Ray is designed to be general-purpose, meaning it can performantly run any kind of workload. If your application is written in Python, you can scale it with Ray; no other infrastructure is required.
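As a rough sketch of what that looks like in practice (the `process` function and data below are illustrative assumptions, not from this page), the only line that changes between a laptop and a cluster is the `ray.init()` call:

```python
import ray

# On a laptop this starts a local Ray runtime; on a cluster you would
# connect to it instead, e.g. ray.init(address="auto"). The rest of the
# code is identical in both environments.
ray.init()

@ray.remote
def process(chunk):
    return sum(chunk)

# Split the work into chunks and process them in parallel as Ray tasks.
chunks = [list(range(i * 1000, (i + 1) * 1000)) for i in range(8)]
results = ray.get([process.remote(chunk) for chunk in chunks])
print(sum(results))
```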
| Platform | Purpose | Estimated Response Time | Support Level |
|---|---|---|---|
| Discourse Forum | For discussions about development and questions about usage. | < 1 day | Community |
| GitHub Issues | For reporting bugs and filing feature requests. | < 2 days | Ray OSS Team |
| Slack | For collaborating with other Ray users. | < 2 days | Community |
| StackOverflow | For asking questions about how to use Ray. | 3-5 days | Community |
| Meetup Group | For learning about Ray projects and best practices. | Monthly | Ray DevRel |
| Twitter | For staying up-to-date on new features. | Daily | Ray DevRel |