AlphaPulldown: Version 2.0.0 (Beta)

AlphaPulldown fully maintains backward compatibility with input files and scripts from versions 1.x.
Table of Contents

About AlphaPulldown
    Overview
    Alphafold databases

Snakemake AlphaPulldown
    1. Installation
    2. Configuration
    3. Execution

Run AlphaPulldown Python Command Line Interface
    0. Installation
        0.1. Create Anaconda environment
        0.2. Installation using pip
        0.3. Installation for the Downstream analysis tools
        0.4. Installation for cross-link input data by AlphaLink2 (optional!)
        0.5. Installation for developers
    1. Compute multiple sequence alignment (MSA) and template features (CPU stage)
        1.1. Basic run
            Input
            Script Execution
            Output
            Next step
        1.2. Example bash scripts for SLURM (EMBL cluster)
            Input
            Script Execution
            Next step
        1.3. Run using MMseqs2 and ColabFold Databases (Faster)
            Run MMseqs2 Remotely
            Output
            Run MMseqs2 Locally
            Next step
        1.4. Run with custom templates (TrueMultimer)
            Input
            Script Execution
            Output
            Next step
    2. Predict structures (GPU stage)
        2.1. Basic run
            Input
            Script Execution: Structure Prediction
            Output
            Next step
        2.2. Example run with SLURM (EMBL cluster)
            Input
            Script Execution
            Output and the next step
        2.3. Pulldown mode
            Multiple inputs "pulldown" mode
            Output and the next step
        2.4. All versus All mode
        2.5. Run with Custom Templates (TrueMultimer)
            Input
            Script Execution for TrueMultimer Structure Prediction
            Output and the next step
        2.6. Run with crosslinking-data (AlphaLink2)
            Input
            Run with AlphaLink2 prediction via AlphaPulldown
            Output and the next step
    3. Analysis and Visualization
        Create Jupyter Notebook
            Next step
        Create Results table
            Next step

Downstream analysis
    Jupyter notebook
    Results table
    Results management scripts
        Decrease the size of AlphaPulldown output
        Convert Models from PDB Format to ModelCIF Format
            1. Convert all models to separate ModelCIF files
            2. Only convert a specific single model for each complex
            3. Have a representative model and keep associated models
            Associated Zip Archives
            Miscellaneous Options
    Features Database
        Installation
            Steps:
            Verify installation:
        Configuration
        Downloading Features
            List available organisms:
            Download specific protein features:
            Download all features for an organism:
AlphaPulldown is a customized implementation of AlphaFold-Multimer designed for customizable, high-throughput screening of protein-protein interactions. It extends AlphaFold's capabilities by incorporating additional run options, such as customizable multimeric structural templates (TrueMultimer), MMseqs2 multiple sequence alignment (MSA) via ColabFold databases, protein fragment predictions, and the ability to incorporate crosslinking mass spectrometry data as input using AlphaLink2.
AlphaPulldown can be used in two ways: either by a two-step pipeline made of python scripts, or by a Snakemake pipeline as a whole. For details on using the Snakemake pipeline, please refer to the separate GitHub repository.
Figure 1 Overview of the AlphaPulldown workflow
The AlphaPulldown workflow involves the following 3 steps:
Create and store MSA and template features:
In this step, AlphaFold searches preinstalled databases using HMMER for each queried protein sequence and calculates multiple sequence alignments (MSAs) for all found homologs. It also searches for homolog structures to use as templates for feature generation. This step only requires CPU.
Customizable options include:
To speed up the search process, MMseqs2 can be used instead of the default HMMER.
Use custom MSA.
Use a custom structural template, including a multimeric one (TrueMultimer mode).
Structure prediction:
In this step, the AlphaFold neural network runs and produces the final protein structure, requiring GPU. A key strength of AlphaPulldown is its ability to flexibly define how proteins are combined for the structure prediction of protein complexes. Here are the three main approaches you can use (a hypothetical input sketch is shown at the end of this subsection):

Single file (custom mode or homo-oligomer mode): Create a file where each row lists the protein sequences you want to predict together, or each row tells the program to model homo-oligomers with your specified number of copies.

Multiple files (pulldown mode): Provide several files, each containing protein sequences. AlphaPulldown will automatically generate all possible combinations by pairing rows of protein names from each file.

All versus all: AlphaPulldown will generate all possible non-redundant combinations of proteins in the list.

Figure 2 Three typical scenarios covered by AlphaPulldown

Additionally, AlphaPulldown also allows you to:

Select only the region(s) of proteins that you want to predict instead of the full-length sequences.

Adjust MSA depth to control the influence of the initial MSA on the final model.

Integrate high-throughput crosslinking data with AlphaFold modeling via AlphaLink2.
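To make the single-file and pulldown inputs above more concrete, here is a hypothetical sketch; the protein names, file names, and separators are illustrative only, and the exact syntax accepted by run_multimer_jobs.py is described in the structure-prediction documentation:

# Hypothetical custom-mode file: one prediction per line, proteins joined into one job
cat > protein_list_custom.txt <<'EOF'
proteinA;proteinB
proteinA;proteinC
EOF

# Hypothetical pulldown-mode files: every row of baits.txt is paired with every row of candidates.txt
cat > baits.txt <<'EOF'
proteinA
EOF
cat > candidates.txt <<'EOF'
proteinB
proteinC
proteinD
EOF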
Downstream analysis of results:
The results for all predicted models can be systematized using one of the following options:
A table containing various scores and physical parameters of protein complex interactions.
A Jupyter notebook with interactive 3D protein models and PAE plots.
For the standard MSA and features calculation, AlphaPulldown requires genetic databases. Check if you have downloaded the necessary parameters and databases (e.g., BFD, MGnify, etc.) as instructed in AlphaFold's documentation. You should have a directory structured as follows:
alphafold_database/ # Total: ~ 2.2 TB (download: 438 GB)
bfd/ # ~ 1.7 TB (download: 271.6 GB)
# 6 files.
mgnify/ # ~ 64 GB (download: 32.9 GB)
mgy_clusters_2018_12.fa
params/ # ~ 3.5 GB (download: 3.5 GB)
# 5 CASP14 models,
# 5 pTM models,
# 5 AlphaFold-Multimer models,
# LICENSE,
# = 16 files.
pdb70/ # ~ 56 GB (download: 19.5 GB)
# 9 files.
pdb_mmcif/ # ~ 206 GB (download: 46 GB)
mmcif_files/
# About 227,000 .cif files.
obsolete.dat
pdb_seqres/ # ~ 0.2 GB (download: 0.2 GB)
pdb_seqres.txt
small_bfd/ # ~ 17 GB (download: 9.6 GB)
bfd-first_non_consensus_sequences.fasta
uniref30/ # ~ 86 GB (download: 24.9 GB)
# 14 files.
uniprot/ # ~ 98.3 GB (download: 49 GB)
uniprot.fasta
uniref90/ # ~ 58 GB (download: 29.7 GB)
uniref90.fasta
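As a quick, optional sanity check (the database root path below is a placeholder, not part of AlphaPulldown), you can verify that all top-level folders listed above are present:

# Replace /path/to/alphafold_database with your actual database root
for d in bfd mgnify params pdb70 pdb_mmcif pdb_seqres small_bfd uniref30 uniprot uniref90; do
  [ -d "/path/to/alphafold_database/$d" ] || echo "missing: $d"
done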
Note
Uniclust30 is the version of the database generated before 2019; UniRef30 is the one generated after 2019. Please note that AlphaPulldown uses UniRef30_2023_02 by default. This version can be downloaded with this script. Alternatively, overwrite the default path to the UniRef30 database using the --uniref30_database_path flag of create_individual_features.py.
Note
Since the local installation of all genetic databases is space-consuming, you can alternatively use the remotely-run MMseqs2 and ColabFold databases. Follow the corresponding instructions. However, for AlphaPulldown to function, you must download the parameters stored in the params/ directory of the AlphaFold database.
AlphaPulldown is available as a Snakemake pipeline, allowing you to sequentially execute (1) Generation of MSAs and template features, (2) Structure prediction, and (3) Results analysis without manual intervention between steps. For more details, please refer to the AlphaPulldownSnakemake repository.
Warning
The Snakemake version of AlphaPulldown differs slightly from the conventional scripts-based AlphaPulldown in terms of input file specifications.
Before installation, make sure your python version is at least 3.10.
python3 --version
Install Dependencies
pip install snakemake==7.32.4 snakedeploy==0.10.0 pulp==2.7 click==8.1 cookiecutter==2.6
Snakemake Cluster Setup
In order to allow snakemake to interface with a compute cluster, we are going to use the Snakemake-Profile for SLURM. If you are not working on a SLURM cluster you can find profiles for different architectures here. The following will create a profile that can be used with snakemake and prompt you for some additional information.
git clone https://github.com/Snakemake-Profiles/slurm.git
profile_dir="${HOME}/.config/snakemake"
mkdir -p "$profile_dir"
template="gh:Snakemake-Profiles/slurm"
cookiecutter --output-dir "$profile_dir" "$template"
During the setup process, you will be prompted to answer several configuration questions. Below are the questions and the recommended responses:
profile_name [slurm]:
slurm_noSidecar
Select use_singularity:
1 (False)
Select use_conda:
1 (False)
jobs [500]:
(Press Enter to accept default)
restart_times [0]:
(Press Enter to accept default)
max_status_checks_per_second [10]:
(Press Enter to accept default)
max_jobs_per_second [10]:
(Press Enter to accept default)
latency_wait [5]:
30
Select print_shell_commands:
1 (False)
sbatch_defaults []:
qos=low nodes=1
Select cluster_sidecar:
2 (no)
cluster_name []:
(Press Enter to leave blank)
cluster_jobname [%r_%w]:
(Press Enter to accept default)
cluster_logpath [logs/slurm/%r/%j]:
(Press Enter to accept default)
cluster_config []:
(Press Enter to leave blank)
After responding to these prompts, your Slurm profile named slurm_noSidecar for Snakemake will be configured as specified.
Singularity (Probably Installed Already): This pipeline makes use of containers for reproducibility. If you are working on the EMBL cluster singularity is already installed and you can skip this step. Otherwise, please install Singularity using the official Singularity guide.
Download The Pipeline: This will download the version specified by '--tag' of the snakemake pipeline and create the repository AlphaPulldownSnakemake or any other name you choose.
snakedeploy deploy-workflow https://github.com/KosinskiLab/AlphaPulldownSnakemake AlphaPulldownSnakemake --tag 1.4.0
cd AlphaPulldownSnakemake
Note
If you want to use the latest version from GitHub, replace --tag X.X.X with --branch main
Install CCP4 package: To install the software needed for the analysis step, please follow these instructions:
Download the Singularity image with our analysis software package:
singularity pull docker://kosinskilab/fold_analysis:latest
singularity build --sandbox <sandbox_directory> fold_analysis_latest.sif
Download CCP4 from https://www.ccp4.ac.uk/download/#os=linux and copy to your server
tar xvzf ccp4-9.0.003-linux64.tar.gz
cd ccp4-9
cp bin/pisa bin/sc <sandbox_directory>/software/
cp lib/* <sandbox_directory>/software/lib64/
Create a new Singularity image with CCP4 included:
cd <directory that contains the sandbox>
singularity build fold_analysis_latest_withCCP4.sif <sandbox_directory>

This should create the fold_analysis_latest_withCCP4.sif file. You can then delete the sandbox directory to save space.
Adjust config/config.yaml
for your particular use case.
If you want to use CCP4 for analysis, open config/config.yaml
in a text editor and change the path to the analysis container to:
analysis_container : "/path/to/fold_analysis_latest_withCCP4.sif"
input_files: This variable holds the path to your sample sheet, where each line corresponds to a folding job. For this pipeline we use the following format specification (a hypothetical sample sheet is sketched after the format examples below):
protein:N:start-stop[_protein:N:start-stop]*
where protein is a path to a file with '.fasta' extension or uniprot ID, N is the number of monomers for this particular protein and start and stop are the residues that should be predicted. However, only protein is required, N, start and stop can be omitted. Hence the following folding jobs for the protein example containing residues 1-50 are equivalent:
example:2
example_example
example:2:1-50
example:1-50_example:1-50
example:1:1-50_example:1:1-50
This format similarly extends for the folding of heteromers:
example1_example2
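For illustration, a hypothetical sample sheet using this specification might look like the following; the UniProt IDs, FASTA path, and residue range are purely illustrative:

# Hypothetical sample sheet: one folding job per line
cat > config/sample_sheet1.csv <<'EOF'
P01308
P01308_P06213:1-900
example.fasta:2
EOF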
Assuming you have two sample sheets config/sample_sheet1.csv and config/sample_sheet2.csv. The following would be equivalent to computing all versus all in sample_sheet1.csv:
input_files:
  - config/sample_sheet1.csv
  - config/sample_sheet1.csv
while the snippet below would be equivalent to computing the pulldown between sample_sheet1.csv and sample_sheet2.csv
input_files:
  - config/sample_sheet1.csv
  - config/sample_sheet2.csv
This format can be extended to as many files as you would like, but keep in mind the number of folds will increase dramatically.
input_files:
  - config/sample_sheet1.csv
  - config/sample_sheet2.csv
  - ...
alphafold_data_directory: This is the path to your AlphaFold database.
output_directory: Snakemake will write the pipeline output to this directory. If it does not exist, it will be created.
save_msa, use_precomputed_msa, predictions_per_model, number_of_recycles, report_cutoff: Command line arguments that were previously passed to AlphaPulldown's run_multimer_jobs.py and create_notebook.py (report_cutoff).
alphafold_inference_threads, alphafold_inference: SLURM-specific parameters that do not need to be modified by non-expert users.
only_generate_features: If set to True, stops after generating features and does not perform structure prediction and reporting.
After following the Installation and Configuration steps, you are now ready to run the snakemake pipeline. To do so, navigate into the cloned pipeline directory and run:
snakemake --use-singularity --singularity-args "-B /scratch:/scratch -B /g/kosinski:/g/kosinski --nv " --jobs 200 --restart-times 5 --profile slurm_noSidecar --rerun-incomplete --rerun-triggers mtime --latency-wait 30 -n
Here's a breakdown of what each argument does:
--use-singularity
: Enables the use of Singularity containers. This allows for reproducibility and isolation of the pipeline environment.
--singularity-args
: Specifies arguments passed directly to Singularity. In the provided example:
-B /scratch:/scratch and -B /g/kosinski:/g/kosinski: These are bind mount points. They make directories from your host system accessible within the Singularity container. --nv ensures the container can make use of the host's GPUs.
--profile name_of_your_profile
: Specifies the Snakemake profile to use (e.g., the SLURM profile you set up for cluster execution).
--rerun-triggers mtime
: Reruns a job if a specific file (trigger) has been modified more recently than the job's output. Here, mtime
checks for file modification time.
--jobs 200
: Allows up to 200 jobs to be submitted to the cluster simultaneously.
--restart-times 5
: Specifies that jobs can be automatically restarted up to 5 times if they fail.
--rerun-incomplete
: Forces the rerun of any jobs that were left incomplete in previous Snakemake runs.
--latency-wait 30
: Waits for 30 seconds after a step finishes to check for the existence of expected output files. This can be useful in file-systems with high latencies.
-n
: Dry-run flag. This makes Snakemake display the commands it would run without actually executing them. It's useful for testing. To run the pipeline for real, simply remove this flag.
Executing the command above will submit the following jobs to the cluster:
AlphaPulldown can be used as a set of scripts for every particular step.
create_individual_features.py: Generates multiple sequence alignments (MSAs), identifies structural templates, and stores the results in monomeric feature .pkl files.

run_multimer_jobs.py: Executes the prediction of structures.

create_notebook.py and alpha-analysis.sif: Prepare an interactive Jupyter Notebook and a Results Table, respectively.
Firstly, install Anaconda and create an AlphaPulldown environment, gathering the necessary dependencies. We recommend using mamba to speed up dependency resolution:
conda create -n AlphaPulldown -c omnia -c bioconda -c conda-forge python==3.11 openmm==8.0 pdbfixer==1.9 kalign2 hhsuite hmmer modelcif
source activate AlphaPulldown
This usually works, but on some compute systems, users may prefer to use other versions or optimized builds of HMMER and HH-suite that are already installed.
Activate the AlphaPulldown environment and install AlphaPulldown:
source activate AlphaPulldown
python3 -m pip install alphapulldown
pip install -U "jax[cuda12]"
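The line above installs the CUDA 12 build of JAX. Optionally, you can check that JAX detects your GPU; this is just a sanity check (run it on a GPU node), not part of the installation:

python3 -c "import jax; print(jax.devices())"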
Note
For older versions of AlphaFold: If you haven't updated your databases according to the requirements of AlphaFold 2.3.0, you can still use AlphaPulldown with your older version of the AlphaFold database. Please follow the installation instructions on the dedicated branch.
Install CCP4 package: To install the software needed for the analysis step, please follow these instructions:
singularity pull docker://kosinskilab/fold_analysis:latest
singularity build --sandbox <sandbox_directory> fold_analysis_latest.sif
# Download the top one from https://www.ccp4.ac.uk/download/#os=linux
tar xvzf ccp4-9.0.003-linux64.tar.gz
cd ccp4-9
cp bin/pisa bin/sc <sandbox_directory>/software/
cp lib/* <sandbox_directory>/software/lib64/
singularity build fold_analysis_latest_withCCP4.sif <sandbox_directory>
Make sure you have installed PyTorch corresponding to the CUDA version you have. Here we take CUDA 11.7 and PyTorch 1.13.0 as an example:
pip install torch==1.13.0+cu117 --extra-index-url https://download.pytorch.org/whl/cu117
Compile Uni-Core:
source activate AlphaPulldown
git clone https://github.com/dptech-corp/Uni-Core.git
cd Uni-Core
python setup.py install --disable-cuda-ext

# test whether unicore is successfully installed
python -c "import unicore"
You may see the following warning, but it's fine:
fused_multi_tensor is not installed corrected
fused_rounding is not installed corrected
fused_layer_norm is not installed corrected
fused_softmax is not installed corrected
Download the PyTorch checkpoints from Zenodo and unzip them; you should obtain a file named AlphaLink-Multimer_SDA_v3.pt
This section is only for developers who would like to modify AlphaPulldown's code and test their modifications. Please add your SSH key to your GitHub account
Clone the GitHub repo
git clone --recurse-submodules git@github.com:KosinskiLab/AlphaPulldown.git
cd AlphaPulldown
git submodule init
git submodule update
Create the Conda environment as described in Create Anaconda environment
Install AlphaPulldown package and add its submodules to the Conda environment (does not work if you want to update the dependencies)
source activate AlphaPulldown
cd AlphaPulldown
pip install .
pip install -e . --no-deps
pip install -e ColabFold --no-deps
pip install -e alphafold --no-deps
You need to do it only once.
When you want to develop, activate the environment, modify files, and the changes should be automatically recognized.
Test your package during development using the tests in test/, e.g.:

pip install pytest
pytest -s test/
pytest -s test/test_predictions_slurm.py
pytest -s test/test_features_with_templates.py::TestCreateIndividualFeaturesWithTemplates::test_1a_run_features_generation
Before pushing to the remote or submitting a pull request, run:

pip install .
pytest -s test/

to install the package and run the tests. Pytest for predictions only works if SLURM is available. Check the created log files in your current directory.
Note
If you work with proteins from model organisms you can directly download the features files from the AlphaPulldown Features Database and skip this step.
This is a general example of create_individual_features.py
usage. For information on running specific tasks or parallel execution on a cluster, please refer to the corresponding sections of this chapter.
At this step, you need to provide a protein FASTA format file with all protein sequences that will be used for complex prediction.
Example of a FASTA file (sequences.fasta
):
>proteinA
SEQUENCEOFPROTEINA
>proteinB
SEQUENCEOFPROTEINB
Activate the AlphaPulldown environment and run the script create_individual_features.py
as follows:
source activate AlphaPulldown
create_individual_features.py \
  --fasta_paths=<sequences.fasta> \
  --data_dir=<path to alphafold databases> \
  --output_dir=<output directory> \
  --max_template_date=<any date you want, format like: 2050-01-01>
Instead of <sequences.fasta>, provide a path to your input FASTA file. You can also provide multiple comma-separated files.

Instead of <path to alphafold databases>, provide a path to the genetic database (see 0. Alphafold-databases of the installation part).

Instead of <output directory>, provide a path to the output directory, where your feature files will be saved.

A date in the flag --max_template_date is needed to restrict the search to protein structures deposited before the indicated date. Unless the date is later than the date of your local genomic database's last update, the script will search for templates among all available structures.
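For orientation, a filled-in command might look like this; all paths and file names below are illustrative and should be adapted to your system:

source activate AlphaPulldown
create_individual_features.py \
  --fasta_paths=sequences.fasta \
  --data_dir=/scratch/AlphaFold_DBs/2.3.2 \
  --output_dir=features_output \
  --max_template_date=2050-01-01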
Features calculation script create_individual_features.py
has several optional FLAGS:
--[no]save_msa_files
: False by default to save storage space, but can be set to True. If set to True, the program will create an individual folder for each protein. The output directory will look like:
output_dir
|- proteinA.pkl
|- proteinA
|- uniref90_hits.sto
|- pdb_hits.sto
|- etc.
|- proteinB.pkl
|- proteinB
|- uniref90_hits.sto
|- pdb_hits.sto
|- etc.
If save_msa_files=False
then the output_dir
will look like:
output_dir
|- proteinA.pkl
|- proteinB.pkl
--[no]use_precomputed_msas
: Default value is False
. However, if you already have MSA files for your proteins, please set the parameter to True and arrange your MSA files in the format below:
example_directory
|- proteinA
|- uniref90_hits.sto
|- pdb_hits.sto
|- ***.a3m
|- etc
|- proteinB
|- ***.sto
|- etc
Then, in the command line, set the output_dir=/path/to/example_directory
.
--[no]skip_existing
: Default is False
but if you have run the 1st step already for some proteins and now add new proteins to the list, you can change skip_existing
to True
in the command line to avoid rerunning the same procedure for the previously calculated proteins.
--seq_index
: Default is None, and the program will run predictions one by one for the given files. However, you can set seq_index to a number if you wish to run an array of jobs in parallel; the program will then only run the job specified by seq_index (e.g., it only calculates features for the 1st protein in your FASTA file if seq_index is set to 1). See also the SLURM sbatch script in section 1.2 for an example of how to use it for parallel execution. ❗ seq_index starts from 1.
--[no]use_mmseqs2
: Use MMseqs2 remotely or not. Default is False.
FLAGS related to TrueMultimer mode (a hypothetical example command combining them is shown after this list of flags):
--path_to_mmt
: Path to directory with multimeric template mmCIF files.
--description_file
: Path to the text file with descriptions for generating features. Please note, the first column must be an exact copy of the protein description from your FASTA files. Please consider shortening them in FASTA files using your favorite text editor for convenience. These names will be used to generate pickle files with monomeric features!
The description.csv for the NS1-P85B complex should look like:
>sp|P03496|NS1_I34A1,3L4Q.cif,A
>sp|P23726|P85B_BOVIN,3L4Q.cif,C
In this example, we refer to the NS1 protein as chain A and to the P85B protein as chain C in multimeric template 3L4Q.cif.
Please note, that your template will be renamed to a PDB code taken from _entry_id. If you use a *.pdb file instead of *.cif, AlphaPulldown will first try to parse the PDB code from the file. Then it will check if the filename is 4-letter long. If it is not, it will generate a random 4-letter code and use it as the PDB code.
--threshold_clashes
: Threshold for VDW overlap to identify clashes. The VDW overlap between two atoms is defined as the sum of their VDW radii minus the distance between their centers. If the overlap exceeds this threshold, the two atoms are considered to be clashing. A positive threshold is how far the VDW surfaces are allowed to interpenetrate before considering the atoms to be clashing. (default: 1000, i.e. no threshold, for thresholding, use 0.6-0.9).
--hb_allowance
: Additional allowance for hydrogen bonding (default: 0.4) used for identifying clashing residues to be removed from a multimeric template. An allowance > 0 reflects the observation that atoms sharing a hydrogen bond can come closer to each other than would be expected from their VDW radii. The allowance is only subtracted for pairs comprised of a donor (or donor-borne hydrogen) and an acceptor. This is equivalent to using smaller radii to characterize hydrogen-bonding interactions.
--plddt_threshold
: Threshold for pLDDT score (default: 0) used to filter residues of a multimeric template (all residues with pLDDT < plddt_threshold are removed and modeled from scratch). Can be used only when multimeric templates are models generated by AlphaFold.
--new_uniclust_dir
: Please use this if you want to overwrite the default path to the uniclust database.
--[no]use_hhsearch
: Use hhsearch instead of hmmsearch when looking for structure template. Default is False.
--[no]multiple_mmts
: Use multiple multimeric templates per chain. Default is False.
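For example, a hypothetical TrueMultimer feature-generation command could combine these flags as follows; the file names and paths are illustrative only:

create_individual_features.py \
  --fasta_paths=NS1_P85B.fasta \
  --data_dir=/scratch/AlphaFold_DBs/2.3.2 \
  --output_dir=features_output \
  --max_template_date=2050-01-01 \
  --path_to_mmt=templates/ \
  --description_file=description.csv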
The result of create_individual_features.py
run is pickle format features for each protein from the input FASTA file (e.g. sequence_name_A.pkl
and sequence_name_B.pkl
) stored in the output_dir
.
Note
The names of the pickle files will be the same as the descriptions of the sequences in the FASTA files (e.g. >protein_A in the FASTA file will yield protein_A.pkl). Note that special symbols such as | : ; #, appearing after >, will be replaced with _.
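As an optional check (the FASTA name and output directory below are illustrative), you can compare the sequence descriptions in your FASTA file with the names of the generated pickles:

grep ">" sequences.fasta
ls features_output/*.pkl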
Proceed to the next step 2.1 Basic Run.
If you run AlphaPulldown on a computer cluster, you may want to execute feature creation in parallel. Here, we provide an example of code that is suitable for a cluster that utilizes SLURM Workload Manager.
For the following example, we will use example_2_sequences.fasta
as input.
Create the create_individual_features_SLURM.sh
script and place the following code in it using vi, nano, or any other text editor. Replace input parameters with the appropriate arguments for the create_individual_features.py
script as described in Basic run or any other type of run you intend to execute:
#!/bin/bash

#A typical run takes a couple of hours but may be much longer
#SBATCH --job-name=array
#SBATCH --time=10:00:00

#log files:
#SBATCH -e logs/create_individual_features_%A_%a_err.txt
#SBATCH -o logs/create_individual_features_%A_%a_out.txt

#qos sets priority
#SBATCH --qos=low

#Limit the run to a single node
#SBATCH -N 1

#Adjust this depending on the node
#SBATCH --ntasks=8
#SBATCH --mem=64000

module load HMMER/3.4-gompi-2023a
module load HH-suite/3.3.0-gompi-2023a
eval "$(conda shell.bash hook)"
module load CUDA/11.8.0
module load cuDNN/8.7.0.84-CUDA-11.8.0
conda activate AlphaPulldown

# CUSTOMIZE THE FOLLOWING SCRIPT PARAMETERS FOR YOUR SPECIFIC TASK:
####
create_individual_features.py \
  --fasta_paths=example_1_sequences.fasta \
  --data_dir=/scratch/AlphaFold_DBs/2.3.2/ \
  --output_dir=/scratch/mydir/test_AlphaPulldown/ \
  --max_template_date=2050-01-01 \
  --skip_existing=True \
  --seq_index=$SLURM_ARRAY_TASK_ID
#####
Make the script executable by running:
chmod +x create_individual_features_SLURM.sh
Next, execute the following commands, replacing <input_fasta_file> with the path to your input FASTA file:

mkdir logs
#Count the number of jobs corresponding to the number of sequences:
count=`grep ">" <input_fasta_file> | wc -l`
#Run the job array, 100 jobs at a time:
sbatch --array=1-$count%100 create_individual_features_SLURM.sh
Example for two files (For more files, create count3
, count4
, etc., variables and add them to the sum of counts):
mkdir logs
#Count the number of jobs corresponding to the number of sequences:
count1=`grep ">" <input_fasta_file_1> | wc -l`
count2=`grep ">" <input_fasta_file_2> | wc -l`
count=$(( $count1 + $count2 ))
#Run the job array, 100 jobs at a time:
sbatch --array=1-$count%100 create_individual_features_SLURM.sh
Proceed to the next step 2.2 Example run with SLURM (EMBL cluster).
MMseqs2 is another method for homolog search and MSA generation. It offers an alternative to the default HMMER and HHblits used by AlphaFold. The results of these different approaches might lead to slightly different protein structure predictions due to variations in the captured evolutionary information within the MSAs. AlphaPulldown supports the implementation of MMseqs2 search made by ColabFold, which also provides a web server for MSA generation, so no local installation of databases is needed.
Cite: If you use MMseqs2, please remember to cite: Mirdita M, Schütze K, Moriwaki Y, Heo L, Ovchinnikov S, Steinegger M. ColabFold: Making protein folding accessible to all. Nature Methods (2022) doi: 10.1038/s41592-022-01488-1
CAUTION: To avoid overloading the remote server, do not submit a large number of jobs simultaneously. If you want to calculate MSAs for many sequences, please use MMseqs2 locally.
To run create_individual_features.py
using MMseqs2 remotely, add the --use_mmseqs2=True
flag:
source activate AlphaPulldown
create_individual_features.py \
  --fasta_paths=<sequences.fasta> \
  --data_dir=<path to alphafold databases> \
  --output_dir=<output directory> \
  --use_mmseqs2=True \
  --max_template_date=<any date you want, format like: 2050-01-01>
After the script run is finished, your output_dir
will look like this:
output_dir
|-proteinA.a3m
|-proteinA_env/
|-proteinA.pkl
|-proteinB.a3m
|-proteinB_env/
|-proteinB.pkl
...
Proceed to the next step 2.1 Basic Run.
AlphaPulldown does NOT provide an interface or code to run MMseqs2 locally, nor will it install MMseqs2 or any other required programs. The user must install MMseqs2, ColabFold databases, ColabFold search, and other required dependencies and run MSA alignments first. An example guide can be found on the ColabFold GitHub.
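For orientation only, a local MMseqs2 search typically boils down to something like the following sketch, assuming you have installed ColabFold and downloaded its databases as described in the ColabFold documentation; the paths are illustrative, and you should consult the ColabFold GitHub for the exact options:

# queries.fasta: the same FASTA file you will later use with AlphaPulldown
colabfold_search queries.fasta /path/to/colabfold_databases msa_output/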
Suppose you have successfully run MMseqs2 locally using the colabfold_search
program; it will generate an A3M file for each protein of your interest. Thus, your output_dir
should look like this:
output_dir
|-0.a3m
|-1.a3m
|-2.a3m
|-3.a3m
...
These a3m files from colabfold_search
are inconveniently named. Thus, we have provided a rename_colab_search_a3m.py
script to help you rename all these files. Download the script from https://github.com/KosinskiLab/AlphaPulldown/blob/main/alphapulldown/scripts/rename_colab_search_a3m.py and run:
cd output_dir
python rename_colab_search_a3m.py path_to_fasta_file_you_used_as_input_for_colabfold_search
Then your output_dir
will become:
output_dir
|-proteinA.a3m
|-proteinB.a3m
|-proteinC.a3m
|-proteinD.a3m
...
Here, proteinA
, proteinB
, etc., correspond to the names in your input FASTA file (e.g., >proteinA
will give you proteinA.a3m
, >proteinB
will give you proteinB.a3m
, etc.).
NOTE: You can also provide your own custom MSA files in .a3m format instead of using the files created by MMseqs2 or standard HMMER. Place appropriately named files in the output directory and use the command as follows.
After this, go back to your project directory with the original FASTA file and point to this directory in the command:
source activate AlphaPulldown
create_individual_features.py \
  --fasta_paths=<sequences.fasta> \
  --data_dir=<path to alphafold databases> \
  --output_dir=<output directory> \
  --skip_existing=False \
  --use_mmseqs2=True \
  --seq_index=<any number you want, or skip the flag to run all sequences one after another>
AlphaPulldown will automatically search each protein's corresponding a3m files. In the end, your output_dir
will look like:
output_dir
|-proteinA.a3m
|-proteinA.pkl
|-proteinB.a3m
|-proteinB.pkl
|-proteinC.a3m
|-proteinC.pkl
...