A Tomato (Fanqie) novel downloader implemented in Python. Please do not abuse it; use it with care.
1. c.exe detects changes in the structure of Tomato novel web pages.
2. s.exe searches novel content and can be used together with the Tomato novel downloader.
3. f.exe splits novel files by file size and can be used together with the Tomato novel downloader.
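The size-based splitting that f.exe performs can be approximated in a few lines of Python. This is an independent sketch, not the tool's actual implementation; the function name and the size threshold are made up, and parts are cut on line boundaries so paragraphs stay intact:

```python
import tempfile
from pathlib import Path

def split_by_size(path, max_bytes=1_000_000):
    """Split a UTF-8 text file into numbered parts no larger than
    max_bytes each, breaking only on line boundaries."""
    src = Path(path)
    parts, buf, size, idx = [], [], 0, 1
    with src.open(encoding="utf-8") as fh:
        for line in fh:
            line_bytes = len(line.encode("utf-8"))
            if buf and size + line_bytes > max_bytes:
                part = src.with_name(f"{src.stem}_part{idx}{src.suffix}")
                part.write_text("".join(buf), encoding="utf-8")
                parts.append(part)
                buf, size, idx = [], 0, idx + 1
            buf.append(line)
            size += line_bytes
    if buf:  # flush whatever remains as the last part
        part = src.with_name(f"{src.stem}_part{idx}{src.suffix}")
        part.write_text("".join(buf), encoding="utf-8")
        parts.append(part)
    return parts

# Demo: split a tiny sample file with a very small size limit
sample = Path(tempfile.mkdtemp()) / "novel.txt"
sample.write_text("line one\nline two\nline three\n", encoding="utf-8")
parts = split_by_size(sample, max_bytes=10)
```

Because splitting happens only between lines, a single line longer than the limit still becomes its own part rather than being cut mid-paragraph.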
Enter the full link to a novel's catalog page, or its download id:
Enter an id or link to download it directly.
Enter 1 to update: the ids stored in record.json will be re-downloaded.
Enter 2 to search.
Enter 3 for batch download.
Enter 4 for settings: adjust the placeholder at the start of each paragraph, the request delay, the novel storage location, and the save mode.
Enter 5 to back up the downloaded novels along with settings such as the download format and paragraph-leading spaces.
Enter 6 to exit the program.
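The first prompt accepts either a bare numeric id or a full catalog link. A minimal sketch of how such input could be normalized (the `/page/<id>` URL pattern is taken from the error message shown later in this document; the function is hypothetical, not the program's actual code):

```python
import re

def extract_book_id(user_input: str) -> str:
    # Catalog links look like https://fanqienovel.com/page/7143038691944959011;
    # if no such pattern is found, treat the input itself as the id.
    match = re.search(r"/page/(\d+)", user_input)
    return match.group(1) if match else user_input.strip()
```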
Settings are stored in config.json.
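As an illustration only, a config.json for the options listed above might look like the following; the key names here are invented for the example and are not the program's actual schema:

```json
{
  "paragraph_placeholder": "  ",
  "delay_seconds": 2,
  "save_path": "./novels",
  "save_mode": "single_file"
}
```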
System | Status |
---|---|
Windows 7 | Runnable |
Windows 10 | Runnable |
Windows 11 | Runnable |
macOS 10.1 | Runnable |
macOS 10.2 | Runnable |
macOS 10.3 | Runnable |
macOS 10.4 | Runnable |
Mac OS X 10.5 | Runnable |
Mac OS X 10.6 | Runnable |
Mac OS X 10.7 | Runnable |
Mac OS X 10.8 | Runnable |
Mac OS X 10.9 | Runnable |
Kali Linux 2024.3 | Runnable |
Error:

```
urllib3.exceptions.ProxyError: ('Unable to connect to proxy', FileNotFoundError(2, 'No such file or directory'))

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "requests/adapters.py", line 667, in send
  File "urllib3/connectionpool.py", line 843, in urlopen
  File "urllib3/util/retry.py", line 519, in increment
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='fanqienovel.com', port=443): Max retries exceeded with url: /page/7143038691944959011 (Caused by ProxyError('Unable to connect to proxy', FileNotFoundError(2, 'No such file or directory')))
```
......
This is a network error: check your network connection (for example, turn off any proxy or accelerator software).
Functions implemented by the web version
Once the web server finishes downloading a novel, you can fetch the file directly to your local computer, so the server can run remotely in a container or virtual machine.
There is a progress bar, beautiful!
You can download novels by ID, search for novels by name, and update previously downloaded novels.
Simple UI.
Queue design: you can add several books to the queue and download them in batches.
(The original code has also been refactored; whether for better or worse is hard to say. The main reason is that the previous code was not convenient to turn into a web version.)
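The queue design mentioned above can be sketched with the standard library. This is a generic producer/worker pattern, not the project's actual code; the book ids are placeholders and the download step is stubbed out:

```python
import queue
import threading

download_queue = queue.Queue()
finished = []

def worker():
    while True:
        book_id = download_queue.get()
        if book_id is None:           # sentinel value: stop the worker
            download_queue.task_done()
            break
        finished.append(book_id)      # placeholder for the real download
        download_queue.task_done()

threading.Thread(target=worker).start()
for book_id in ["7143038691944959011", "1234567890"]:
    download_queue.put(book_id)
download_queue.put(None)
download_queue.join()                 # block until the queue is drained
```

Because the worker runs in its own thread, new ids can be queued while earlier ones are still downloading.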
The web version currently does not ship as an exe file. There are two ways to run it.
Python run
Clone this project with Git, or download the project zip and unzip it. Enter the project folder, create a new virtual environment, and run pip install -r requirements.txt to install the project's Python dependencies.
Then enter the src directory, run server.py with Python, and follow the instructions to open http://localhost:12930 in a browser. (Note: whether you got the project as a zip or via git, if your Python version is 3.8 or below, delete the original main.py in the src directory and rename main2.py to main.py.)
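The interpreter version mentioned in the note can be checked from Python itself; a small helper for deciding which entry point applies (the 3.8 cutoff is taken from the note above):

```python
import sys

# Per the note above, Python 3.8 and below should use main2.py
# (renamed to main.py); newer interpreters use main.py directly.
use_fallback_entry = sys.version_info < (3, 9)
print("rename main2.py -> main.py" if use_fallback_entry else "use main.py as-is")
```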
Docker run
Clone this project using Git or directly download the zip of the project and unzip it. Go into the project folder.
Simply run docker compose up (or docker compose up -d to run in the background) to build and start the image. Once started, open http://localhost:12930 in your browser.
The downloaded novels and personal data (the data folder) are stored in Docker volumes named fanqie_data and fanqie_downloads respectively. If you would rather use specific host directories, edit the persistent-user-data section of the docker-compose.yaml file.
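For illustration, replacing the named volumes with bind mounts might look like the fragment below. The service name `web` and the container paths are guesses for this example; match them to what the project's docker-compose.yaml actually declares:

```yaml
services:
  web:
    volumes:
      - ./my_data:/app/data            # host directory instead of fanqie_data
      - ./my_downloads:/app/downloads  # host directory instead of fanqie_downloads
```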
If you have any comments, or find errors in the program, feel free to discuss them in Issues.
This program is designed for educational and research purposes related to Python web crawlers and web page processing technologies. It should not be used for any illegal activities or acts that violate the rights of others. Users are responsible for any legal liabilities and risks arising from the use of this program. The author and project contributors are not responsible for any losses or damages resulting from the use of the program.
Before using this program, please ensure compliance with relevant laws and regulations and the website's usage policies. Consult a legal advisor if you have any questions or concerns.
This program is open source under the AGPL-3.0 license. When using its source code, please credit the source and release your work under the same license.
Authors: Yck (ying-ck), Yqy (qxqycb) & Lingo (lingo34)