Welcome to ZySec AI, where artificial intelligence meets cybersecurity. Project ZySec, powered by the innovative ZySec 7B model, is reshaping the cybersecurity landscape with AI-driven solutions.
ZySec's mission is to make AI accessible to security professionals like you!
ZySec AI leads the charge in integrating Cyber Security with Artificial Intelligence. Our vision is to transform how security professionals leverage technology. ZySec AI is more than just a tool; it is a holistic approach to enhancing security operations, merging AI's innovative power with the unique challenges of cybersecurity, while prioritizing privacy.
Note: ZySec AI is designed to operate without internet connectivity, ensuring complete privacy. The only exception is the optional internet research feature.
ZySec 7B, the cornerstone of ZySec AI, is built on HuggingFace's Zephyr language model series. Custom-designed for cybersecurity, it offers an expert level of knowledge and insights. The model is extensively trained across more than 30 unique domains, ensuring its effectiveness and reliability in the cybersecurity field.
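Because ZySec AI is designed to run fully offline, you may want to pre-download the model weights once while you still have connectivity. The sketch below uses the Hugging Face CLI; the repository id and local directory are assumptions, so confirm the exact path on the ZySec AI model card.

Pre-fetch the model for offline use
pip install -U "huggingface_hub[cli]"
# Repository id and target directory are assumptions -- check the model card for the exact path
huggingface-cli download ZySec-AI/ZySec-7B-v1 --local-dir ./models/ZySec-7B-v1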
You have the flexibility to run the ZySec AI application either locally on your computer or remotely on a GPU instance, depending on your preferences and resource availability.
Local Deployment: Suitable for development, testing, or light usage. Follow the setup instructions below to run the application on your local machine.
Remote Deployment on a GPU Instance: For better performance, especially when handling larger workloads or requiring faster processing, consider deploying on a GPU instance. Use the vLLM deployment mode for optimal performance in a GPU environment.
For enhanced performance, the following model can be deployed on a GPU instance: ZySec-7B-v1 on Hugging Face. It is specifically optimized for GPU-based deployments and offers significant performance improvements over CPU-based setups.
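As a rough sketch of the GPU path, the commands below start an OpenAI-compatible vLLM server for the model. The repository id, port, and installation step are assumptions; adapt them to your instance and to how your checkout's config.cfg and start.sh are configured.

Serve the model with vLLM on a GPU instance
pip install vllm
# Model id and port are assumptions -- adjust to your environment
python -m vllm.entrypoints.openai.api_server \
    --model ZySec-AI/ZySec-7B-v1 \
    --port 8000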
Clone the Repository: Start by cloning the ZySec AI repository from GitHub to your local machine.
Clone the project
git clone https://github.com/ZySec-AI/ZySec.git
Starting the Application Server: Modify the config.cfg file as per your requirements. By default, the script will download the model and run a local instance using llama-cpp-python[server]:
chmod +x start.sh
./start.sh
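Once start.sh is running, the llama-cpp-python[server] backend exposes an OpenAI-compatible HTTP API. Here is a quick smoke test, assuming the server listens on the default port 8000 (adjust if config.cfg specifies otherwise):

Test the local server
# Port 8000 is an assumption -- check config.cfg for the actual host and port
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "List the phases of incident response."}], "max_tokens": 200}'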
ZySec AI is released under the Apache License, Version 2.0 (Apache-2.0), a permissive open-source license. This license allows you to freely use, modify, distribute, and sell your own versions of this work, under the terms of the license.
View the Apache License, Version 2.0
Special thanks to the HuggingFace and LangChain communities for their inspiration and contributions to the field of AI. Their pioneering work continues to inspire projects like ZySec AI.
Venkatesh Siddi is a notable expert in cybersecurity, integrating Artificial Intelligence and Machine Learning into complex security challenges. His expertise extends to big data, cloud security, and innovative technology design.