Proteomics: Tools, Techniques and Translational Potential
eBook
Published: July 3, 2025

Proteomics continues to evolve rapidly, offering powerful tools to unravel complex biological systems. As protein-focused research intersects with clinical and translational needs, there's growing demand for robust workflows, high-throughput capabilities and data-driven insights.
This eBook explores key methodologies and innovations reshaping proteomics, from foundational techniques to AI-enhanced data analysis.
Download this eBook to explore:
- Emerging workflows and best practices across diverse proteomic applications
- Strategies for data integration and enhanced reproducibility
- The role of next-generation tools in translational and clinical settings
CONTENTS
- Mastering the Western Blotting Frontier: Advanced Tips and Tricks for Success
- Mapping the Plasma Proteome: Unlocking Biomarkers for Precision Medicine
- Current and Emerging Applications of ELISAs
- Clinical Proteomics: Progress and Future Prospects With Professor Jennifer Van Eyk
- Innovating Drug Discovery With Next-Generation Proteomics
- Perfect Your Protein Prep for Mass Spectrometry
- Unlocking Nature’s Secrets With Metabolome Informed Proteome Imaging
- Unveiling Disease-Associated Antigens: The Role of Immunoproteomics
- Advancing Mass Spectrometry Data Analysis Through Artificial Intelligence and Machine Learning
- Advances in Biomarker Discovery and Analysis
FOREWORD
Proteins shape cellular behavior, mark disease progression and guide our understanding of health. In
today’s rapidly evolving biomedical landscape, the ability to analyze, interpret and apply protein-centric
data has never been more crucial.
From mastering foundational techniques like western blotting and ELISAs to exploring the vast
capabilities of mass spectrometry and AI-enhanced data analysis, this eBook illuminates the workflows
and technologies driving proteomics research. Readers will gain insights into high-throughput plasma
profiling, the integration of metabolomics with spatial proteome imaging and the emergence of remote
sampling as a tool for decentralized diagnostics.
Whether you are deepening your expertise or just beginning to explore the proteomic landscape, this
eBook offers a valuable resource for understanding where the field stands today and where it’s headed.
The Technology Networks editorial team
Mastering the Western Blotting Frontier: Advanced Tips and Tricks for Success
Aldrin V. Gomes, PhD
If you had to pick a handful of techniques to rely on in a laboratory, western blotting would most likely be among them. If you love puzzles, you probably
love western blotting. The technique is like solving
a mystery with clues hidden in the gel, antibodies,
buffers and the membrane. When you use suitable
antibodies, buffers and detection methods, you will
reveal the secrets of your protein samples. On June 28,
2024, a PubMed search for "western blot" returned
413,000 results, indicating its immense popularity.
This blotting technique's affordability and adaptability
have contributed significantly to its use in biological
research compared to other methods. Western blotting,
or immunoblotting, is simply a multi-step procedure
that utilizes highly selective antibodies that bind to
a specific protein of interest, allowing investigators
to detect a specific protein in a complex mixture of
proteins.1 Besides determining if a protein of interest
is present in a sample, western blotting can also allow
you to determine the amount of protein, the molecular
weight of the protein and detect modifications on
proteins.1 The most common protein modifications that
can be detected include phosphorylation, glycosylation,
ubiquitination, acetylation and methylation. The key to
detecting protein modifications is having an antibody
specific for the modification on the protein of interest.
The main steps of a western blotting experiment
include sample preparation, applying the sample to a
polyacrylamide gel, which separates proteins based on
size, protein transfer from the polyacrylamide gel to a
membrane (typically nitrocellulose or polyvinylidene
fluoride (PVDF)), blocking non-specific binding sites
on the membrane, incubation of the membrane with
an antibody that is specific for the protein of interest,
incubation of the membrane with secondary antibodies
that interact with the primary antibody and detection of
the signal generated by the secondary antibody.
While western blotting is one of my favorite techniques,
it has two main downsides: the high dependency of the
method on the availability of a specific antibody and the
thousands of possible variations of protocols for the
technique. To help reduce the variability in procedures
for new users, we have a detailed protocol that has been
optimized over 15 years — it is available on protocols.io.2
While a quick search online can find tips for each step
of the western blotting procedure, I will discuss hidden
gems that are essential for mastering western blotting
skills.
1) The amount of protein loaded
into each well is critical
In a blotting experiment, the quantity and characteristics of the antibody influence the final result, but the amount of protein loaded into each gel well is equally important. Detecting a protein within its linear signal range depends on its abundance and the total amount of protein loaded in each lane, so loading should be optimized to keep the protein of interest within its linear intensity range. Studies from our laboratory suggest that loading 10–20 μg of total protein per lane is ideal for most proteins,3,4 and this is the range we typically use with around 40 different antibodies. In experiments designed to test the limits of loading, we found that wells containing more than 80 μg of protein often show distorted band patterns compared with lanes loaded with lower amounts of total protein, so we no longer load more than 20 μg for western blots.
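One practical way to find that linear range is to run a dilution series of total protein and check where the signal stops scaling proportionally with the amount loaded. The Python sketch below fits a line to a loading series and trims the highest loads until the fit is acceptable; the loading amounts, intensities and the R² threshold are illustrative assumptions, not data or thresholds from this article.

```python
# Minimal sketch: estimating the linear range of a western blot loading series.
# Loading amounts and band intensities are made-up illustrative values.
import numpy as np

loads = np.array([2.5, 5, 10, 20, 40, 80])              # µg total protein per lane
signal = np.array([150, 310, 620, 1180, 1850, 2100])    # band intensity (arbitrary units)

def r_squared(x, y):
    """Coefficient of determination for a straight-line fit."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1 - residuals.var() / y.var()

# Drop the highest loads one at a time until the remaining points fit a line well.
n = len(loads)
while n > 3 and r_squared(loads[:n], signal[:n]) < 0.99:  # 0.99 cutoff is an assumption
    n -= 1
print(f"approximately linear up to ~{loads[n - 1]:g} µg (R² = {r_squared(loads[:n], signal[:n]):.3f})")
```

With these example numbers the signal stops increasing proportionally above roughly 20 µg per lane, which is consistent with the loading range recommended above.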
2) Be careful not to overload wells
While some scientists try to maximize the volume they
load into wells, loading more than 85% of the maximum
capacity of a well can lead to non-reproducible results. If
a well can hold a maximum of 15 μl of sample, loading more than about 13 μl (85% of 15 μl is roughly 12.8 μl) often causes spillover into nearby lanes. In our lab, we often observe spillage into adjacent wells when loading 15 μl of sample into a 15 μl well. While this spillover is not always apparent, it can be detected by loading sample buffer without any protein into the lanes next to the fully loaded well and probing the blot for a highly abundant protein.
3) All antibodies need validation
Every antibody that will be used for western blotting
requires validation.5,6 We use VCV (verification,
confirmation, validation) for antibodies used in
our laboratory. While some antibodies come with
validation documentation from suppliers, many lack
this validation. VCV plays a role in enhancing the
reproducibility of antibodies that are not properly
validated. While having a validated antibody specific
to your protein of interest does not ensure an excellent
western blotting result, using an unvalidated, nonspecific
antibody ensures that your result will be
inaccurate.
It is vital to verify that an antibody accurately identifies the target protein at its expected molecular weight in positive controls and shows no signal at that weight in negative controls.
The specificity of an antibody can be confirmed by testing it against the epitope-matching peptide used in its development. Specificity refers to the ability of an antibody to bind selectively to its epitope or antigen. This peptide
can often be obtained from the manufacturer of the
antibody.
The reproducibility of western blots using a specific
antibody can be validated by doing at least two
independent experiments to confirm consistent results
within the same laboratory.
4) Use a gel percentage optimized
for your target of interest
The composition percentage of gels determines
pore size, influencing how proteins migrate during
electrophoresis. The higher the percentage of
acrylamide, the smaller the pore size, resulting in slower
mobility or exclusion of larger proteins. Using a single
gel percentage is advantageous for separating proteins
with similar sizes effectively. It is essential to remember that acrylamide is a neurotoxin, so it is crucial to wear gloves for safety.
Table 1: Rough guidelines for selecting a suitable polyacrylamide gel percentage according to protein size for tris-glycine gels.

Protein size (kDa)    Optimal polyacrylamide gel percentage
4–40                  20%
12–45                 15%
10–70                 12.5%
15–100                10%
40–200                8%
60 to >200            4–6%
While single-percentage gels are great for separating bands that are close in molecular weight, for unfamiliar or unknown samples it is best to use a gradient gel, such as 4–20% (separates 10–250 kDa), to evaluate the sample first, and then switch to an appropriate single-percentage gel once the size range of the protein has been determined (see Table 1).
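To make the guidance in Table 1 concrete, the following Python sketch picks a gel percentage for a given protein size. The ranges mirror Table 1, but the tie-breaking rule (preferring the highest percentage whose range covers the protein) and the gradient-gel fallback are assumptions for illustration, not a validated selection algorithm.

```python
# Minimal sketch: choosing a tris-glycine gel percentage from a protein's size (kDa),
# using the rough ranges in Table 1. The preference for the highest suitable
# percentage and the gradient-gel fallback are assumptions.

GEL_RANGES = [  # (min kDa, max kDa, gel percentage), listed from highest to lowest percentage
    (4, 40, "20%"),
    (12, 45, "15%"),
    (10, 70, "12.5%"),
    (15, 100, "10%"),
    (40, 200, "8%"),
    (60, float("inf"), "4-6%"),
]

def suggest_gel_percentage(protein_kda: float) -> str:
    """Return the first (highest) gel percentage whose range covers the protein size."""
    matches = [label for lo, hi, label in GEL_RANGES if lo <= protein_kda <= hi]
    if not matches:
        return "4-20% gradient"  # unknown or out-of-range size: start with a gradient gel
    return matches[0]

if __name__ == "__main__":
    for size in (15, 55, 120, 427):  # 427 kDa is roughly dystrophin
        print(f"{size} kDa -> {suggest_gel_percentage(size)}")
```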
5) Optimize transfer for the target
protein of interest
Most scientists I asked use the same set of transfer
conditions regardless of what protein of interest
they are investigating. Various factors such as the
composition of the transfer buffer, transfer duration, size
of the membrane, type of membrane and thickness of
the gel influence how proteins are transferred from gels
to membranes. Thinner gel layers enable the transfer of
proteins to membranes more easily compared to thicker
gels, while a smaller pore size (0.2 μm) is recommended
for nitrocellulose membranes when blotting proteins
under 15 kDa. Dystrophin, with a molecular weight of 427 kDa, can be effectively transferred to a nitrocellulose membrane using a constant current (300 mA) at 4 °C for 18 hours.7 On the other hand,
certain small proteins may pass through the membrane
depending on the conditions under which they are
transferred.
6) Some antibodies work best on
autoclaved or cross-linked proteins
Some proteins, such as ubiquitin, seem to renature after
transfer to membranes. Autoclaving the membrane
or cross-linking the proteins on the membrane with
glutaraldehyde has been shown to improve the
sensitivity of western blotting for ubiquitin.6,8,9 In
one case, the pre-treatment of the blot with 0.5%
glutaraldehyde drastically increased the signal intensity
of free ubiquitin and polyubiquitinated proteins.6
7) Use total protein staining
instead of housekeeping proteins
for normalization
Everyone who does western blotting knows the proteins
actin, glyceraldehyde 3-phosphate dehydrogenase
(GAPDH) and tubulin. These are the housekeeping
proteins that are most commonly used for the
normalization of western blots. However, many studies have shown that housekeeping proteins do vary with experimental conditions. Another disadvantage of many housekeeping proteins is that they saturate the blot at lower total protein amounts than the proteins the user is interested in quantifying.5,10 Some housekeeping proteins were found to be saturated at total protein amounts of less than 10 μg, less than most laboratories load per lane.11 Therefore, normalizing the signal intensity of the
protein of interest against a housekeeping protein can
sometimes lead to errors. Total protein staining, which
measures the protein content in each lane, proves more
reliable for normalizing western blots as it does not
rely on any specific reference protein.10,12 Our data and
literature strongly suggest that total protein staining
is more accurate and reproducible than housekeeping
proteins for normalization of western blots. The most
common total protein staining method is Ponceau S.
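To make the normalization arithmetic concrete, here is a minimal Python sketch assuming you have already exported background-subtracted band intensities for the target protein and total-lane intensities from a Ponceau S (or stain-free) image; the lane names and numbers are illustrative only, not real data.

```python
# Minimal sketch: total-protein normalization of western blot band intensities.
# Assumes densitometry has already produced background-subtracted values;
# the values below are illustrative, not real measurements.

target_band = {"ctrl_1": 1200.0, "ctrl_2": 1150.0, "treated_1": 2300.0, "treated_2": 2150.0}
total_lane = {"ctrl_1": 54000.0, "ctrl_2": 50000.0, "treated_1": 61000.0, "treated_2": 57000.0}

def normalize_to_total_protein(band, lane_total):
    """Divide each target band intensity by its lane's total protein signal,
    then rescale so that the mean of the control lanes equals 1."""
    ratios = {lane: band[lane] / lane_total[lane] for lane in band}
    ctrl_values = [v for lane, v in ratios.items() if lane.startswith("ctrl")]
    ctrl_mean = sum(ctrl_values) / len(ctrl_values)
    return {lane: value / ctrl_mean for lane, value in ratios.items()}

if __name__ == "__main__":
    for lane, value in normalize_to_total_protein(target_band, total_lane).items():
        print(f"{lane}: {value:.2f}")
```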
8) When doing western blotting to
check for specific modifications,
use inhibitors in the sample
preparation buffer to prevent these
modifications from being removed
The number and quality of antibodies available to detect modifications on proteins continue to increase significantly. Unfortunately, in my experience, some labs that use blotting to detect post-translational modifications may not be optimizing the procedure effectively. For instance, it is crucial to include phosphatase inhibitors in the samples to prevent protein dephosphorylation during preparation. When studying modifications such as ubiquitination, methylation and acetylation, adding proteasome, demethylase and deacetylase inhibitors, respectively, is necessary.13
For glycosylation sites, iodoacetamide and sodium
fluoride can be used to inhibit O-linked glycosidases,
while deoxynojirimycin can be used to inhibit N-linked
glycosidases. Conducting all sample preparations on
ice or in a cold room is also essential. When researchers
use inhibitors, this information should be included
in manuscripts to encourage others to use these
techniques for better results.
9) The buffer used to wash blots and dilute
antibodies matters
If you spend a few minutes on PubMed looking at the
western blot methods section of manuscripts, you will
notice that most researchers use a buffer containing 3–5%
non-fat milk for diluting antibodies. However, a buffer
containing 1% non-fat milk gives better results. Along
with the amount of non-fat milk, the presence of sodium
chloride in the buffer plays a significant role. In our lab,
we once encountered a two-week period where all our
western blots showed a weak signal. Troubleshooting traced the poor results to a tris-buffered saline (TBS) solution obtained from a commercial manufacturer, which contained 500 mM sodium chloride instead of the commonly used 150 mM.
Final words: Understanding how the western blotting procedure can be optimized will help everyone using this technique obtain more reproducible and visually attractive images. Give it a try and see your efforts rewarded with reliable results!
REFERENCES
1. Sule R, Rivera G, Gomes AV. Western blotting (immunoblotting):
History, theory, uses, protocol and problems. Biotechniques.
2023;75:99-114. doi:10.2144/btn-2022-0034
2. Sule R, Rivera G, Gomes AV. Detailed western blotting
(immunoblotting) protocol. Protocols.io. https://www.protocols.io/view/detailed-western-blotting-immunoblotting-protocolyxmvmn2rng3p/v1. Published 2022. Accessed June 25, 2024.
3. Sule RO, Phinney BS, Salemi MR, Gomes AV. Mitochondrial and
proteasome dysfunction occurs in the hearts of mice treated with
triazine herbicide prometryn. Int J Mol Sci. 2023;24. doi:10.3390/ijms242015266
4. Ghosh R, Gilda JE, Gomes AV. The necessity of and strategies for
improving confidence in the accuracy of western blots. Expert Rev
Proteomics. 2014;11:549-560. doi:10.1586/14789450.2014.939635
5. Brooks HL, de Castro Brás LE, Brunt KR, et al. Guidelines on
antibody use in physiology research. Am J Physiol Renal Physiol.
2024;326:F511-F533. doi:10.1152/ajprenal.00347.2023
6. Gilda JE, Ghosh R, Cheah JX, West TM, Bodine SC, Gomes AV.
Western blotting inaccuracies with unverified antibodies: Need
for a western blotting minimal reporting standard (WBMRS). PLoS
One. 2015;10: e0135392. doi:10.1371/journal.pone.0135392
7. Taylor LE, Kaminoh YJ, Rodesch CK, Flanigan KM. Quantification
of dystrophin immunofluorescence in dystrophinopathy muscle
specimens. Neuropathol Appl Neurobiol. 2012;38:591-601.
doi:10.1111/j.1365-2990.2012.01250.x
8. Swerdlow PS, Finley D, Varshavsky A. Enhancement of
immunoblot sensitivity by heating of hydrated filters. Anal
Biochem. 1986;156:147-153. doi:10.1016/0003-2697(86)90166-1
9. Emmerich CH, Cohen P. Optimising methods for the preservation,
capture and identification of ubiquitin chains and ubiquitylated
proteins by immunoblotting. Biochem Biophys Res Commun.
2015;466:1-14. doi:10.1016/j.bbrc.2015.08.109
10. Gilda JE, Gomes AV. Stain-free total protein staining is a superior
loading control to beta-actin for western blots. Anal Biochem.
2013;440:186-188. doi:10.1016/j.ab.2013.05.027
11. Moritz CP. Tubulin or not tubulin: Heading toward total protein
staining as loading control in Western blots. Proteomics. 2017;17.
doi:10.1002/pmic.201600189
12. Eaton SL, Roche SL, Hurtado ML, et al. Total protein analysis as
a reliable loading control for quantitative fluorescent western
blotting. PLoS One. 2013;8. doi:10.1371/journal.pone.0072457
13. Mishra M, Tiwari S, Gomes AV. Protein purification and analysis:
Next generation western blotting techniques. Expert Rev
Proteomics. 2017;14:1037-1053. doi:10.1080/14789450.2017.1388167
Mapping the Plasma Proteome: Unlocking Biomarkers for Precision Medicine
Alison Halliday, PhD
Advances in high-throughput proteomics now allow
researchers to profile circulating plasma proteins at
unprecedented depth and scale – accelerating efforts
to identify novel biomarkers for the early detection of disease, disease monitoring and the prediction of treatment response.
However, the complexity of the plasma proteome poses
challenges: it contains a diverse array of proteins that
vary considerably in abundance.
“Proteins exhibit significant inter-individual variability,
particularly in blood, so large-scale analyses are crucial
to uncover disease-specific changes and transition these
discoveries into the clinic,” says Christoph Messner,
assistant professor for precision proteomics at the
University of Zurich, Switzerland.
Researchers employ a range of next-generation tools for
high-throughput analysis of hundreds to thousands of
blood samples to identify and validate potential disease-related biomarkers.
“We rely on mass spectrometry (MS)-based
proteomics to detect and quantify the highly abundant
proteins, while multiplex affinity-based assays allow
us to profile those at lower concentrations,” says
Fredrik Edfors Arfwidsson, assistant professor and
docent in biotechnology at the KTH Royal Institute
of Technology, Sweden. “By combining these
complementary technologies, we can capture a more
comprehensive view of the plasma proteome.”
Next-generation tools for
proteomics
MS-based proteomic workflows are well-established
in research laboratories and are routinely used for
biomarker discovery and profiling. Over the past
decade, dramatic advances in technology have led to
faster, more sensitive instruments capable of identifying
thousands of proteins and peptides in a single run.
“You can scan much faster now, which has significantly
increased throughput,” says Messner. “The latest
instruments also enable unprecedented proteome
depth, allowing us to detect a vast number of proteins
within remarkably short measurement times.”
In recent years, the introduction of data-independent
acquisition (DIA) methods has also been a game
changer for MS analyses.
“This was a really important step forward,” expresses
Messner. “DIA enables shorter measurement gradients –
reducing analysis times and increasing throughput. The
results are also more reproducible than conventional
data-dependent acquisition (DDA) methods.”
Affinity-based assays, which use binding to target proteins to enable the simultaneous detection and quantification of hundreds or even thousands of specific proteins from a single sample, offer a powerful alternative to MS-based proteomics. The number of assays on these highly multiplexed proteomic platforms has increased more than ten-fold over the past 15 years, rising from fewer than 1,000 to over 11,000.
One of these high-throughput proteomics platforms
uses the proximity extension assay (PEA), which
combines oligonucleotide-linked antibodies with
quantitative real-time PCR.
“These multiplex panels have expanded dramatically,
thanks to antibody barcoding,” explains Edfors
Arfwidsson. “We can now measure up to 5,400
proteins from just a single drop of blood, or less than 10
microliters of plasma.”
Another tool for large-scale proteomics employs
aptamers – nucleotide-based compounds with high
protein-binding specificity and sensitivity – instead of
antibodies. The latest version enables the simultaneous
measurement of over 11,000 proteins from just 55
microliters of plasma.*
Large-scale plasma proteomics
As a Scientific Director of the Human Blood Atlas, part
of the wider Human Protein Atlas, Edfors Arfwidsson
is at the forefront of efforts to use advanced proteomic
tools for the large-scale profiling of plasma proteins in
health and disease.
“We initially focused on profiling circulating proteins in
healthy individuals, which revealed that each person has
a unique proteomic fingerprint that remains remarkably
stable over time,” says Edfors Arfwidsson. “This insight
led us to expand our work and start to explore different
disease states.”
In 2023, the team published a landmark study analyzing
the plasma proteome in cancer patients. Using the
PEA, they measured 1,463 proteins in minute amounts
of blood from more than 1,400 cancer patients across
12 cancer types collected at the time of diagnosis and
before treatment.
“By applying machine learning, we identified biomarker
panels that could effectively differentiate between
different cancer types and even stage colorectal
cancers,” says Edfors Arfwidsson. “This was achieved
from just one drop of blood.”
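For readers unfamiliar with how such analyses are usually structured, the sketch below trains a simple multi-class classifier on a plasma protein abundance matrix with scikit-learn and ranks proteins by importance. It is an illustrative workflow on synthetic data with assumed dimensions, not the pipeline used in the study described above.

```python
# Illustrative sketch: classifying cancer types from a plasma protein abundance matrix.
# Synthetic data stands in for real measurements; this is not the published pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_proteins, n_classes = 300, 1463, 3
X = rng.normal(size=(n_samples, n_proteins))     # protein abundances (e.g., normalized PEA values)
y = rng.integers(0, n_classes, size=n_samples)   # cancer type labels

model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5)      # cross-validated accuracy
print(f"mean cross-validated accuracy: {scores.mean():.2f}")

# Feature importances can suggest candidate proteins for a biomarker panel.
model.fit(X, y)
top_proteins = np.argsort(model.feature_importances_)[::-1][:20]
print("indices of the top 20 candidate proteins:", top_proteins)
```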
Building on this work, the Disease Blood Atlas
was launched at the most recent Human Proteome
Organization (HUPO) annual meeting. This openaccess
resource contains next-generation blood profiling
data across 59 different diseases and 6,121 patients,
covering cardiovascular, metabolic, cancer, psychiatric,
autoimmune, infectious and pediatric diseases.
“It’s fascinating to observe how some well-known
blood-based biomarkers behave across different
diseases,” says Edfors Arfwidsson. “For example, we
found that a cancer biomarker was also upregulated in
certain infections.”
Other large-scale proteomics studies are generating
independent datasets that will enable cross-validation of
candidate biomarkers across different populations.
One such study examined the impact of common
genetic variation on circulating blood proteins and
their role in disease. The researchers measured the
abundance of 2,923 proteins in 54,219 participants,
uncovering over 14,000 associations between common
genetic variants and plasma proteins, over 80% of which
were previously unknown. This vast dataset is now
accessible to scientists around the world through the
UK Biobank.
Another team applied high-throughput MS to map the
plasma proteome for sepsis, a life-threatening condition
caused by a dysregulated host response to infection
leading to organ failure. By analyzing 2,612 samples
from 1,611 patients, they identified potential biomarkers
that could help pave the way for precision medicine
approaches to improve sepsis diagnosis and treatment.
Researchers from the UK and China found that
loneliness is linked to a higher risk of illnesses such as
heart disease, stroke, type 2 diabetes and susceptibility
to infection. They drew this conclusion after studying
data from over 42,000 participants across nearly 3,000
plasma proteins in the UK Biobank.
The scale of these plasma proteomics studies continues
to grow, promising to revolutionize our understanding
of diseases and their treatments. The UK Biobank
recently launched the world’s most comprehensive
study of plasma proteins to date, aiming to measure up
to 5,400 proteins across 600,000 samples, including
those from half a million participants and 100,000
follow-up samples collected up to 15 years later. This
unparalleled large-scale initiative will create a first-of-its-kind database, enabling researchers to investigate
how changes in blood protein levels during mid-to-late
life influence an individual’s disease risk.
Translational challenges
While large-scale proteomics studies are very powerful
tools for plasma biomarker discovery, significant
barriers remain in translating these findings into clinical
applications.
A big challenge lies in integrating and analyzing the
enormous datasets generated to identify panels of
protein biomarkers associated with specific disease
phenotypes.
“We have a core team of four bioinformaticians working
solely on these datasets, and we’re only just scratching
the surface of their potential,” says Edfors Arfwidsson.
“We’ll probably spend another 5 or 10 years integrating
these data with other resources.”
The adoption of artificial intelligence (AI) and machine
learning tools has significantly enhanced the capacity
to interpret these complex datasets. For example, in
MS-based proteomics, machine learning – particularly
deep learning – can now predict experimental peptide
measurements from amino acid sequences alone.
However, while these tools excel in pattern recognition
and predictive modeling, their success relies on access
to large, high-quality annotated datasets that are often
limited in proteomics.
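To illustrate the kind of sequence-to-measurement prediction being described, here is a deliberately simplified Python sketch that featurizes peptides by amino acid composition and fits a regression model to synthetic retention times. Real tools use deep neural networks trained on large spectral libraries, so this is a toy picture of the idea rather than a representation of any specific software.

```python
# Toy sketch: predicting a peptide property (e.g., retention time) from its sequence.
# Composition features and ridge regression stand in for the deep learning models
# used in practice; the peptides and retention times below are synthetic.
import numpy as np
from sklearn.linear_model import Ridge

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def featurize(peptide: str) -> np.ndarray:
    """Encode a peptide as its amino acid composition (fraction of each residue)."""
    counts = np.array([peptide.count(aa) for aa in AMINO_ACIDS], dtype=float)
    return counts / max(len(peptide), 1)

train_peptides = ["LGEYGFQNALIVR", "AEFVEVTK", "YLYEIARR", "HPYFYAPELLYYANK"]
train_rt = np.array([42.1, 18.7, 25.3, 55.9])  # synthetic retention times (minutes)

X = np.vstack([featurize(p) for p in train_peptides])
model = Ridge(alpha=1.0).fit(X, train_rt)

query = "DLGEEHFK"
predicted = model.predict(featurize(query)[None, :])[0]
print(f"predicted retention time for {query}: {predicted:.1f} min")
```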
Beyond data handling and analysis, achieving regulatory
approval poses additional challenges. Validation of
protein biomarkers requires rigorous evidence of their
accuracy, specificity, sensitivity and reproducibility
across diverse patient populations. However, the lack
of standardized experimental protocols, data formats
and quality control measures often undermines
reproducibility and comparability across studies.
“These advanced proteomics technologies, combined with
AI and machine learning, are generating a lot of promising
data – but we’re still missing the tools we need to validate
these findings,” says Edfors Arfwidsson. “We’re identifying
promising biomarker panels, but it’s really hard to cross-validate them because of the lack of standardization.”
Addressing these hurdles will require coordinated
efforts between researchers, industry and regulatory
bodies. Establishing standardized frameworks and
robust validation protocols is critical to ensuring
biomarkers can transition from the lab to the clinic
without compromising patient safety.
An exciting future
Next-generation proteomics is enabling researchers to
explore the plasma proteome with unprecedented depth
and scale to deepen understanding of human health and
disease.
“It’s an exciting time to work in this field,” says Messner.
“As instrumentation and methodologies continue to
advance rapidly, proteomics will become increasingly
important for clinical applications.”
Blood-based protein biomarkers hold the potential to
revolutionize healthcare by enabling earlier disease
detection, personalized treatment strategies and more
effective monitoring of therapeutic responses.
“In the near future, I can foresee that we’ll be routinely
profiling the plasma proteome to detect early signs of
disease or other markers that can provide information
about a person’s health trajectory,” predicts Edfors
Arfwidsson. “An annual blood test to assess risk factors
or monitor specific diseases would be feasible to deploy
at scale.”
*This article is based on research findings that are yet
to be peer-reviewed. Results are therefore regarded
as preliminary and should be interpreted as such.
For further information, please contact the cited source.
ABOUT THE INTERVIEWEES:
Christoph Messner is an assistant professor for precision
proteomics at the University of Zurich and head of the Precision
Proteomics Center at the Swiss Institute of Allergy and Asthma
Research (SIAF). His research focuses on the development of
high-throughput proteomics technologies and their application in
biomarker discovery and systems biology.
Fredrik Edfors Arfwidsson is an assistant professor and docent
in biotechnology with a specialization in precision medicine and
diagnostics at the KTH Royal Institute of Technology in Sweden.
His research focuses on leveraging cutting-edge proteomics
technologies for plasma profiling to deepen understanding of
human health and disease.
Current and Emerging Applications of ELISAs
Kate Harrison, PhD
Enzyme-linked immunosorbent assays (ELISAs) are
a cornerstone of clinical and research laboratories.
ELISAs can detect and quantify the presence of
antibodies, antigens or proteins in an enormous range of
biological and environmental samples. Generally carried
out in 96-well plates, ELISAs rely on the immobilization
of an antigen or antibody to a solid surface (i.e., the
base of the plate) and the subsequent binding of a
corresponding antibody or antigen in a specific manner.
Although the most basic form of an ELISA simply
detects the presence of a particular antigen or antibody,
quantification can be achieved by adding an antibody conjugated to an enzyme that converts a substrate into a measurable product, such as fluorescence or a color change.
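In practice, quantification from such a readout involves fitting a standard curve and interpolating unknown samples. The sketch below fits a four-parameter logistic (4PL) curve to absorbance readings with SciPy; the standard concentrations and optical densities are made-up illustrative values, and 4PL is one common curve choice rather than a universal rule.

```python
# Minimal sketch: quantifying an ELISA with a four-parameter logistic (4PL) standard curve.
# Standard concentrations and absorbances below are illustrative, not real data.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL model: a = minimum response, d = maximum response, c = inflection point, b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([1000, 333, 111, 37, 12.3, 4.1, 1.4, 0.0])          # standards (pg/mL)
std_od = np.array([2.10, 1.75, 1.20, 0.70, 0.35, 0.18, 0.10, 0.05])     # absorbance at 450 nm

# Fit the curve; the blank (zero) standard is excluded because the model
# cannot be evaluated at x = 0 for all slope values.
params, _ = curve_fit(four_pl, std_conc[:-1], std_od[:-1],
                      p0=[0.05, 1.0, 100.0, 2.2], maxfev=10000)

def od_to_concentration(od, a, b, c, d):
    """Invert the 4PL curve to estimate concentration from a sample's absorbance."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

sample_od = 0.55
print(f"estimated concentration: {od_to_concentration(sample_od, *params):.1f} pg/mL")
```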
ELISAs were first developed in 1971, by two
independent groups. Both of these simultaneously
developed assays were based on enzymatic reporters,
rather than the radioactive reporters used by previous
immunoassay iterations.1 These enzyme-based
immunoassays were safer and easier to perform than
radioactivity-based immunoassays, and quickly
became widespread. Since then, ELISAs have been
adapted for use in a wide variety of fields for numerous
applications, from epidemiological monitoring to
environmental pollutant analysis. There are now
hundreds of commercially available ELISA kits for the
detection and quantification of antibodies and antigens.
However, research continues into the development
of novel biomarkers and ELISA-based technologies
to improve sensitivity and broaden the potential uses.
Some of these current and emerging applications and
methodologies for ELISAs are discussed below.
Clinical diagnostics and
epidemiology
One of the most widespread applications of ELISAs is
in clinical diagnostics and epidemiology. ELISAs can
be used to detect both antigen markers of a disease
and the antibodies against it, depending on the type of
assay selected. Therefore, they can be used to diagnose
an active infection and track the spread of a disease
in an outbreak scenario, and also identify previously
infected individuals. To help monitor disease epidemics, a portable field version of an ELISA would be advantageous for onsite diagnosis of infectious disease. However, although several avenues of research have been investigated to this end, such as a battery-operated ELISA system2 and smartphone-linked ELISA systems,3,4 nothing is, as yet, commercially available.
In addition to infectious diseases, ELISAs can also
be used to indicate the presence of auto-reactive
antibodies. Both the general presence of autoantibodies,
(using a plate with a variety of self-antigens adsorbed
to the surface), or autoantibodies related to specific
autoimmune diseases can be analyzed, making ELISAs
indispensable in autoimmune disease diagnostics.5 One
such example is the identification of rheumatoid factors
(RFs). RFs are antibodies that bind to other antibodies
and are commonly found in systemic autoimmune
diseases. RFs have diagnostic and prognostic value in
several autoimmune diseases and are part of the specific
diagnostic criteria for rheumatoid arthritis.6 For other
autoimmune diseases, like autoimmune connective
tissue diseases (CTDs), ELISAs for autoreactive
antinuclear antibodies (ANAs) are emerging as a
more sensitive, more specific and less time-consuming
alternative to the current gold standard: indirect
immunofluorescence assays.7
Despite their range and versatility, traditional ELISAs
are limited in scenarios requiring extremely high
sensitivity for nanomolar levels of analyte, making them
unsuitable for early diagnosis or screening of diseases
with subtle physiological changes such as Alzheimer’s
disease (AD). However, the recent development of
nano-ELISAs is now opening more avenues for highly
sensitive detection. In a nano-ELISA, a detector
antibody and a signaling molecule such as horseradish
peroxidase (HRP) are coupled with gold nanoparticles.
These nanoparticles act as signal amplifiers, significantly lowering the detection limit and allowing the
detection of antigens or antibodies even in the picogram
range.8 This method has been used to identify the AD
biomarker Aβ42 in serum samples at extremely low
levels.9 As cerebral spinal fluid is currently collected
for detection of AD biomarkers, this suggests future
potential for nano-ELISAs as a novel, less invasive
method for AD diagnosis.10
Biopharmaceutical and vaccine
development
Biopharmaceuticals are therapeutics derived from,
or created in, biological sources. Demand for these
therapeutics continues to grow as they revolutionize the
treatment of a wide range of diseases, including cancer
and autoimmune diseases. As they are produced from
biological sources, they must be purified to an extremely
high standard, to remove any host cell proteins (HCPs)
or residual manufacturing reagents. ELISAs are one
of the standard tools for quantifying the total levels of
residual impurities, due to their high throughput, high
sensitivity and high specificity.11 However, they are
unable to give any information on the specific, individual
HCPs present, so are now often coupled with liquid
chromatography-mass spectrometry (LC-MS) for more
in-depth evaluation of biopharmaceutical purity.
One of the simplest uses for ELISAs is the quantification
of antibodies in biological samples. This is essential
in the development of novel vaccines. Many vaccines
protect against infection by inducing the creation of
specific antibodies, meaning quantification of antibody
levels post-vaccination is needed to assess vaccine
efficacy. ELISAs have therefore been used extensively in
the clinical development of approved vaccines, including
the ChAdOx1 nCoV-19 vaccine against SARS-CoV-2
and the recently approved R21 and RTS,S vaccines
against malaria.12,13,14
Forensic investigations
One of the most common applications for ELISAs in
forensic science is drug testing. Toxicologists can test
forensic samples for a wide range of illicit substances,
including cocaine, amphetamines, opiates, cannabinoids
and benzodiazepines. Both liquid samples, such as
blood and urine, and solid samples such as hair can
be processed for ELISA testing, for purposes such as
monitoring drug abstinence and workplace testing.15,16
ELISAs can also be used to determine if a blood sample
is human for forensic investigation by using antibodies
targeting specifically human blood components.15 By
targeting the assay at highly abundant antigens in
the blood, such as human albumin, blood can still be
identified as human by ELISA even in ancient, buried
samples over 3,000 years old.17
In addition to analyzing bodily fluids, ELISAs can also
be used in forensic investigations to help determine
a cause of death. Traumatic brain injury (TBI) is one
of the leading causes of mortality worldwide and is
responsible for approximately 30% of all injury-related
deaths.18 Post-mortem identification of TBI can be
difficult, as ante-mortem clinical diagnosis usually
involves neurological assessment. Recent research has
shown elevated levels of cerebral protein biomarkers,
such as tau protein and myelin basic protein, in blood
and cerebrospinal fluid samples from cases where TBI
was the confirmed cause of death. These markers can
therefore be used as an indicator of TBI in post-mortem
examination, even in the absence of visible central
nervous system damage.19,20,21
Food safety
ELISAs play a significant role in food safety testing
and are used to screen for contaminants and allergens.
Food allergies affect approximately 2% of the Western
population, and allergic and anaphylactic reactions to
food result in 30,000 hospital visits and 150 deaths
per year in the United States alone.22 Allergies can be
extremely sensitive. For example, even trace amounts of
peanuts can trigger an anaphylactic reaction. Therefore,
it is very important that foods labeled as allergen-free are indeed safe for consumption.
Cross-contact with allergens can occur at many stages
of food processing and production and is often the
result of failure to clean machines appropriately when
switching between allergen-containing and allergen-free
foods. However, ELISAs are used by manufacturers to
detect common allergens in food products, including
nuts, meat, milk and milk products, fish and soybeans.
Their high sensitivity and specificity allow ELISAs to identify cross-contact, preventing the need for food recalls and protecting allergic populations. However, as
large numbers of batch samples will typically need to be
tested on a regular basis, ELISAs, despite their relative
speed, can still result in bottlenecks. Novel, rapid
microfluidic-based ELISA platforms are currently being
researched to develop a method for more sensitive and
far faster allergen identification.23,24
Despite their widespread use in the field, traditional
ELISA techniques can lack the sensitivity needed to
detect extremely low, but still dangerous, levels of toxic
or microbial contaminants. However, the development
of nano-ELISAs have enabled the detection of harmful
microbes, pesticides, drug residues and pollutants
in food.25 Nano-ELISA techniques have now been
developed to identify a range of food contaminants,
including aflatoxin in corn, pathogenic Listeria
monocytogenes in milk and veterinary drug traces in
poultry.26,27,28
Environmental analysis
Although ELISAs have typically been used for biological
sample analysis, their application in environmental sampling is increasing, particularly for the detection and
quantification of agricultural and industrial pollutants
in water. The use of pesticides has increased steadily
over the last 30 years worldwide, resulting in increased
levels of pollution and potentially toxic agricultural
runoff.29 Often, analysis of environmental soil and
water samples for pollutants uses techniques such as
gas and liquid chromatography (GC and LC). However,
these techniques can be costly and time-consuming
compared to ELISAs, which have been shown to be
just as sensitive as chromatography methods.30 Indeed,
ELISAs, along with other immune-based assays, have
been successfully used in several large-scale water
quality surveys in the US.30
Pharmaceutical pollutants, such as hormones and
drugs, can also be assessed in both water samples
and the tissues of aquatic organisms by ELISA.31
These emerging pollutants can have a significant
effect on aquatic ecosystems, damaging animal life,
altering microbial populations, leading to detrimental algal blooms, and perpetuating antibiotic resistance.
Monitoring these pollutants is essential for minimizing
harm and developing better regulatory guidelines to
prevent their release into the environment.32
REFERENCES
1. Deutschmann C, Roggenbuck D, Schierack P, Rödiger S.
Autoantibody testing by enzyme-linked immunosorbent assay-a
case in which the solid phase decides on success and failure.
Heliyon. 2020;6(1). doi: 10.1016/j.heliyon.2020.e03270
2. Singh H, Morioka K, Shimojima M, et al. A handy field-portable
ELISA system for rapid onsite diagnosis of infectious diseases.
Japan J Infect Dis. 2016;69(5):435-438. doi: 10.7883/yoken.jjid.2015.417
3. Long KD, Yu H, Cunningham BT. Smartphone instrument for
portable enzyme- linked immunosorbent assays. Biomed Opt
Express. 2014;5(11):3792-3806. doi: 10.1364/boe.5.003792
4. Zhdanov A, Keefe J, Franco-Waite L, Konnaiyan KR, Pyayt
A. Mobile phone based ELISA (MELISA). Biosens Bioelectron.
2018;103:138-142. doi: 10.1016/j.bios.2017.12.033
5. Drijvers JM, Awan IM, Perugino CA, Rosenberg IM, Pillai S. The
enzyme-linked immunosorbent assay. In: Jalali M, Saldanha
FYL, Jalali M, eds. Basic Science Methods for Clinical Researchers.
Academic Press;2017:119-133. doi: 10.1016/b978-0-12-803077-6.00007-2
6. Trier NH, Houen G. (2019). Determination of Rheumatoid Factors
by ELISA. In: Houen G, eds. Autoantibodies. Methods in Molecular
Biology, vol 1901. New York, NY: Humana Press. doi: 10.1007/978-1-4939-8949-2_23
7. Alsaed OS, Alamlih LI, Al-Radideh O, Chandra P, Alemadi
S, Al-Allaf A-W. Clinical utility of ANA-ELISA vs ANA-immunofluorescence in connective tissue diseases. Sci Rep.
2021;11(1):8229. doi: 10.1038/s41598-021-87366-w
8. Ambrosi A, Castañeda MT, Killard AJ, Smyth MR, Alegret S,
Merkoçi A. Double-codified gold nanolabels for enhanced
immunoanalysis. Anal Chem. 2007;79(14):5232-5240. doi: 10.1021/ac070357m
9. Hu T, Lu S, Chen C, Sun J, Yang X. Colorimetric Sandwich
Immunosensor for Aβ(1-42) based on dual antibody-modified gold
nanoparticles. Sensors Actuat B Chem. 2017;243:792-799. doi:
10.1016/j.snb.2016.12.052
10. Al Abdullah S, Najm L, Ladouceur L, et al. Functional
nanomaterials for the diagnosis of Alzheimer’s disease:
Recent progress and future perspectives. Adv Funct Mater.
2023;33(37):2302673. doi: 10.1002/adfm.202302673
11. Zhu-Shimoni J, Yu C, Nishihara J, et al. Host cell protein testing
by ELISAs and the use of orthogonal methods. Biotechnol Bioeng.
2014;111(12):2367-2379. doi: 10.1002/bit.25327
12. Ramasamy MN, Minassian AM, Ewer KJ, et al. Safety and
immunogenicity of ChAdOx1 nCoV-19 vaccine administered in
a prime-boost regimen in young and old adults (COV002): A
single-blind, randomised, controlled, phase 2/3 trial. Lancet.
2020;396(10267):1979-1993. doi: 10.1016/s0140-6736(20)32466-1
13. Genito CJ, Brooks K, Smith A, et al. Protective antibody threshold
of RTS,S/AS01 malaria vaccine correlates antigen and adjuvant
dose in mouse model. NPJ Vaccines. 2023;8(1):114. doi: 10.1038/
s41541-023-00714-x
14. Datoo MS, Natama HM, Somé A, et al. Efficacy and immunogenicity
of R21/Matrix-M vaccine against clinical malaria after 2 years’
follow-up in children in Burkina Faso: A phase 1/2B randomised
controlled trial. Lancet Infect Dis. 2022;22(12):1728-1736. doi:
10.1016/s1473-3099(22)00442-x
15. Perrigo BJ, Joynt BP. Use of ELISA for the detection of common
drugs of abuse in forensic whole blood samples. Can Soc Forensic
Sci J. 1995;28(4):261-269. doi: 10.1080/00085030.1995.10757486
16. Agius R, Nadulski T. Utility of ELISA screening for the monitoring
of abstinence from illegal and legal drugs in hair and urine. Drug
Test Anal. 2014;6(S1):101-109. doi: 10.1002/dta.1644
17. Cattaneo C, Gelsthorpe K, Phillips P, Sokol RJ. Reliable
identification of human albumin in ancient bone using ELISA and
monoclonal antibodies. Am J Biol Anthropol. 1992;87(3):365-372.
doi: 10.1002/ajpa.1330870311
18. Demlie TA, Alemu MT, Messelu MA, Wagnew F, Mekonen EG.
Incidence and predictors of mortality among traumatic brain
injury patients admitted to Amhara region Comprehensive
Specialized Hospitals, Northwest Ethiopia, 2022. BMC Emerg Med.
2023;23(1):55. doi: 10.1186/s12873-023-00823-9
19. Olczak M, Poniatowski ŁA, Niderla-Bielińska J, et al. Concentration
of microtubule associated protein tau (MAPT) in urine and saliva
as a potential biomarker of traumatic brain injury in relationship
with blood–brain barrier disruption in postmortem examination.
Forensic Sci Int. 2019;301:28-36. doi: 10.1016/j.forsciint.2019.05.010
20. Olczak M, Niderla-Bielińska J, Kwiatkowska M, Samojłowicz D,
Tarka S, Wierzba-Bobrowicz T. Tau protein (MAPT) as a possible
biochemical marker of traumatic brain injury in postmortem
examination. Forensic Sci Int. 2017;280:1-7. doi: 10.1016/j.forsciint.2017.09.008
21. Olczak M, Poniatowski ŁA, Siwińska A, Kwiatkowska M. Postmortem
detection of neuronal and astroglial biochemical markers
in serum and urine for diagnostics of Traumatic Brain Injury. Int J
Legal Med. 2023;137(5):1441-1452. doi: 10.1007/s00414-023-02990-7
22. Food allergies: The “big 9”. USDA Food Safety and Inspection
Service. https://www.fsis.usda.gov/food-safety/safe-food-handling-and-preparation/food-safety-basics/food-allergies.
Published March 21, 2024. Accessed April 2, 2024.
23. Weng X, Gaur G, Neethirajan S. Rapid detection of food allergens
by microfluidics ELISA-based optical sensor. Biosensors.
2016;6(2):24. doi: 10.3390/bios6020024
24. Uddin MJ, Bhuiyan NH, Shim JS. Fully integrated rapid microfluidic
device translated from conventional 96-well ELISA kit. Sci Rep.
2021;11(1):1986. doi: 10.1038/s41598-021-81433-y
25. Wu L, Li G, Xu X, Zhu L, Huang R, Chen X. Application of nano-
ELISA in food analysis: Recent advances and challenges. Trends
Analyt Chem. 2019;113:140-156. doi: 10.1016/j.trac.2019.02.002
26. Zhan S, Hu J, Li Y, Huang X, Xiong Y. Direct competitive ELISA
enhanced by dynamic light scattering for the ultrasensitive
detection of aflatoxin B1 in corn samples. Food Chem.
2021;342:128327. doi: 10.1016/j.foodchem.2020.128327
27. Wang W, Liu L, Song S, Xu L, Zhu J, Kuang H. Gold nanoparticle-based paper sensor for multiple detection of 12 Listeria spp.
by P60-mediated monoclonal antibody. Food Agr Immunol.
2017;28(2):274-287. doi: 10.1080/09540105.2016.1263986
28. Song M, Xiao Z, Xue Y, Zhang X, Ding S, Li J. Development of
an indirect competitive ELISA based on immunomagnetic
beads’ clean-up for detection of maduramicin in three
chicken tissues. Food Agr Immunol. 2018;29(1):590-599. doi:
10.1080/09540105.2017.1418842
29. Agricultural consumption of pesticides worldwide from 1990
to 2021. Statista. https://www.statista.com/statistics/1263077/
global-pesticide-agricultural-use/. Published July 2023. Accessed
April 02, 2024.
30. Aga DS, Thurman EM. Environmental immunoassays: Alternative
techniques for soil and water analysis. In: Aga DS, Thurman EM,
eds. Immunochemical Technology for Environmental Applications.
ACS Sym Ser;1997:1-20. doi: 10.1021/bk-1997-0657.ch001
31. Jaria G, Calisto V, Otero M, Esteves VI. Monitoring pharmaceuticals
in the aquatic environment using enzyme-linked immunosorbent
assay (ELISA)—a practical overview. Anal Bioanal Chem.
2020;412(17):3983-4008. doi: 10.1007/s00216-020-02509-8
32. Khan AH, Barros R. Pharmaceuticals in water: Risks to aquatic life
and remediation strategies. Hydrobiology. 2023;2(2):395-409. doi:
10.3390/hydrobiology2020026
Clinical Proteomics: Progress and Future Prospects With Professor Jennifer Van Eyk
Molly Coddington
“As a scientific discipline, proteomics isn’t really that
old – it’s like protein biochemistry gone crazy,” Dr.
Jennifer Van Eyk, professor of cardiology, director of
the Advanced Clinical Biosystems Institute and the
Erika Glazer Endowed Chair in Women’s Heart Health at Cedars-Sinai Medical Center, said.
Prof. Van Eyk’s journey in clinical proteomics began
when she was a peptide chemist: “I was a minimalist,
taking big proteins and breaking them down to their
fundamental amino acids.”
In peptide chemistry, proteins are typically studied in
isolation using techniques such as sodium dodecyl sulfate-polyacrylamide gel electrophoresis or
Edman degradation. In the latter portion of the
20th century, proteomics – the study of the proteome
– emerged as its own field. Over recent decades, it has
evolved to new heights thanks to projects such as the
Human Genome Project, the launch of organizations
such as the Human Proteome Organization
(HUPO) and increasingly higher-throughput techniques
for protein analysis.
Now an international leader in clinical proteomics, Prof.
Van Eyk is no longer a minimalist and strives to achieve
a completely different goal in her work: characterizing as
many proteins as possible at once.
What is clinical proteomics?
Clinical proteomics, as the name suggests, describes the
study of the proteome and the application of subsequent
insights in a clinical context. Such insights might be used
to interrogate the biological underpinnings of a disease,
identify and validate disease biomarkers, identify novel
drug targets, predict disease outcomes or understand
drug resistance mechanisms.
The proteome is dynamic and incredibly complex.
Studying it with the aspiration to translate basic
research into clinical insights is far from easy, but it’s
a field that Prof. Van Eyk has always been passionate
about:
“It’s my personal belief that if I am lucky enough to
work as a scientist, I want to be able to pay back to
society – I want to help change lives. You can do that in
a lot of different ways, and I’m choosing to do it through
medicine,” she said.
Having worked in clinical proteomics for over 25 years,
Prof. Van Eyk’s research has helped to shape the field as
we know it today. The laboratories at Cedars-Sinai are
set up to explore and quantify proteins in all their forms,
on small and large scales. Prof. Van Eyk is renowned
for her research developing technical pipelines for de
novo discovery and larger-scale quantitative mass
spectrometry (MS) methods.
Primary research in the Van Eyk laboratory has two core
goals:
• Understand the molecular mechanism underlying
acute and chronic disease and treatment therapies, and
• Develop clinically robust circulating biomarkers.
Technology Networks had the pleasure of interviewing
Prof. Van Eyk – who is also the organization’s President
– to learn more about the exciting work going on in her
laboratory, how she hopes to address bottlenecks in
clinical proteomics and what she hopes her legacy will
be in this field.
Innovations in clinical proteomics
Remote sampling devices
Prof. Van Eyk highlighted microsampling as a
particularly exciting area of research within clinical
proteomics. “Not everybody has the money and access
[to healthcare] that we have in the Western World, but
we all have the right to know the status of our health,”
she said.
“Microsampling can help science and medicine become
more inclusive, reduce costs for healthcare systems
and help people access important insights about their
health.”
Human blood is a rich source of protein biomarkers
that can be used to determine our health status at a
given time point. Repeated blood samples taken from
the same individual can also provide insights into their
health over time. The value of such data cannot be overstated – it could facilitate the discovery of novel
biomarkers associated with disease progression, provide
insights into the effectiveness of lifestyle interventions
or monitor an individual’s response to a drug treatment.
However, current health assessments require in-clinic
venipuncture by a trained phlebotomist. The blood
sample is then safely packaged and transported to a
laboratory for processing and analysis. There are several
issues with this system: first, it’s not always convenient
for individuals to access even one clinic appointment,
let alone make repeated visits to have blood drawn
over time. Second, venipuncture can be uncomfortable,
which may discourage individuals from undergoing
blood tests.
Collectively, these factors limit accessibility to
important health insights and make it challenging to
conduct large-scale population health studies.
Imagine being able to collect a smaller sample of blood
at home, using a safe, minimally invasive device, before
packaging your sample and posting it for analysis at a
location that’s convenient for you. This vision for the
future of telehealth is driving the development of remote
sampling devices, capable of collecting important
protein biomarker data using increasingly smaller
samples of blood (microsampling).
Prof. Van Eyk has been contributing to this emerging
area of research for many years, developing robust
MS-based workflows for remote sampling devices
and exploring their viability compared to existing
approaches for health analysis. Though remote sampling
devices remain in development, Prof. Van Eyk is
enthusiastic that they can one day democratize health
care: “It’s going to be a long road, but I think we have a
responsibility to help people access their data and have
some level of control over their health.”
The Molecular Twin
In oncology, Prof. Van Eyk shared a novel precision
medicine platform known as the “Molecular Twin”.
Precision oncology is an innovative approach that
considers a patient’s unique molecular makeup
when designing and implementing their treatment plan.
Genetic insights provide one layer of detail, but even
with the same genetic mutations, some cancer patients
respond differently to identical treatment plans.
At Cedars-Sinai, Prof. Van Eyk, Prof. Dan Theodorescu and many other colleagues helped to build
Molecular Twin by integrating clinical and multiomic
features from patients with resected pancreatic ductal
adenocarcinoma. Multiomic features were gathered
using techniques including next-generation sequencing,
full-transcriptome RNA sequencing, paired tumor tissue
proteomics, unpaired plasma proteomics, lipidomics and
computational pathology. These data were integrated and, with a helping hand from machine-learning models, used to accurately predict disease survival. “It turned
out that plasma proteomics was a very good indicator of
disease survival,” Prof. Van Eyk said.
By continuing to build Molecular Twins across multiple
cancer types, she hopes that clinicians can compare
incoming cancer patients with their “twin” data from
the platform and better tailor their treatments: “We will
know who [from the database] the patient looked like at
the beginning of their treatment, and then either follow
the same course because it was successful or move to
another course.”
Understanding drug responsiveness using
single-cell analysis
“We also have some remarkable work looking at how
patients respond to treatment in inflammatory bowel
disease (IBD),” Prof. Van Eyk said. IBD is a group of
inflammatory conditions that represent a major public
health burden. A cure does not exist, but symptoms can
be managed using medication. If medications fail to calm
inflammation, surgery may be required. IBD patients
who require surgery to remove part of the colon will
typically be prescribed anti-TNF therapy as a standard
of care.
“Unfortunately, many of these patients will become
unresponsive to that therapy within a few weeks. Then
at some point, whether it’s one year, two years or five
years after surgery, it will no longer work in all patients
and they will have to move onto another therapy,” Prof.
Van Eyk explained. Her laboratory is looking to the
proteome to identify biomarkers of unresponsiveness
and to investigate whether it’s possible to predict which
patients might benefit from changing therapies earlier.
The lab is also adopting single-cell proteomics
techniques to understand how populations of cells
respond differently to drugs. “If we think about the
heart as an example, and the cells that cause the
heart muscle to contract, we now know that there are
multiple populations of cells that almost ‘group’. When
you give these cells a drug, they might not respond
equally. If a drug fails, it might be because the individual
does not have enough of that specific population of
cells for the drug to be effective,” Prof. Van Eyk said.
“Which, of course, has important implications for drug
development. We’ll see how this research progresses
in the long run, but I think these types of technology
advancements are letting us answer questions that we
never even thought to ask before, which is very cool.”
Moving biomarkers to the clinic –
overcoming obstacles and carving a legacy
Despite incredible advances over recent decades,
translating proteomics insights to the clinic is a work
in progress. Prof. Van Eyk discussed some of the key
challenges in discovering biomarkers, validating them
and their route to clinical implementation:
“I think the idea that we can identify analytes that
closely define pathophysiology within a group of
individuals is becoming a closer reality. The challenge
of getting this research into clinical implementation is
that, often, the middle technologies or expertise in that
domain are not as well established in academia. There
are funding issues, and there isn’t always a clear route
to success. Lots of factors can influence that success,
which ultimately has nothing to do with whether you
have a good biomarker.”
Prof. Van Eyk emphasized that, even if researchers
create an assay and gather sufficient data to prove its
usefulness, there is still a long way to go for that assay
to become a part of standard care. It’s a process that
requires input from a group of different communities,
and oftentimes, the original researchers working on the
discovery aspect are not involved in it. Clinicians also
need to be able to inform researchers completing the
discovery work – and those responsible for its funding
– about the key questions that need to be answered
through discovery research.
“It has to be a two-way conversation,” Prof. Van Eyk
said. While knitting these communities together isn’t
easy, it’s going to be critical for the future of the field,
she emphasized: “I think a lot of clinical proteomics over
recent decades has been about developing tools that
ensure our data is reproducible. Now, we’re at the point
where we need to engage in conversations about how
this data is actually used. Those conversations need to
happen globally within the field.”
Looking to the future, Prof. Van Eyk is determined to
continue her work addressing these bottlenecks. “I think
the legacy that I’d like to leave in this field is that I made
it easier for researchers to get their biomarkers to the
clinic efficiently. That way, they don’t have to go through
all the learning processes that I did,” she said.
A highly respected mentor in proteomics, Prof. Van Eyk
is passionate about using her expertise and experience
to support early-career researchers.
“When I first started in proteomics, it wasn’t even a
word,” she laughed. “What we now know, and what we
can achieve, is just remarkable. Every year, I see a new
level of sophistication, innovation and thinking in my
trainees, who are so adept at speaking the language of
this evolving field. I want to continue watching their
capabilities grow and support them for a little while
longer because it’s such a joy.”
MEET THE INTERVIEWEE
Jennifer Van Eyk, PhD is an international leader in the area of
clinical proteomics. She is professor of cardiology, director of
the Advanced Clinical Biosystems Institute and the Erika Glazer
Endowed Chair in Women’s Heart at Cedars-Sinai Medical Center.
Innovating Drug Discovery With Next-Generation Proteomics

Evotec

Significant hurdles remain in drug discovery, with most known disease targets classified as “undruggable.” These lack the structural features ideal
for small molecule binding, such as deep, well-defined
ligand binding pockets. Instead, undruggable proteins
often have flat or highly dynamic surfaces without
obvious binding hotspots.
Next-generation proteomics is helping to overcome
these challenges. For example, the emergence of
data-independent acquisition mass spectrometry
(DIA-MS) techniques is improving the discovery of
drugs for notoriously challenging targets. DIA-MS
is an untargeted mass spectrometry approach that
offers in-depth coverage of the proteome with high
reproducibility, accuracy and throughput.
This method not only identifies novel disease-relevant
biomarkers and targets but also enables the discovery of
novel therapeutic compounds through fragment-based
screening. In this approach, chemical probes can be
screened against the entire proteome. This MS-based
technique precisely maps target-ligand interactions in
relevant human cells and tissues.
This article discusses how fragment-based screening using
DIA-MS drives the discovery of novel drug compounds,
including covalent and reversible inhibitors. We also
explore how Evotec is expanding the druggable proteome
with its next-generation proteomics platforms, including
high-throughput activity-based protein profiling (HT-ABPP),
high-throughput photoaffinity labeling mass
spectrometry (HT-PALMS) and ScreenPep™.
Discovering novel covalent
fragments using a DIA-MS-based
method
Covalent fragments hold the potential to drug the
undruggable with their ability to bind to shallow,
cryptic, allosteric or transient binding sites. By forming
irreversible covalent bonds with their targets, these
fragments offer high target occupancy, potency and a
prolonged duration of action.
Novel covalent fragments can be discovered using
DIA-MS-based approaches, like activity-based protein profiling (ABPP). In ABPP, labeled chemical probes are screened against the entire proteome, with DIA-MS used to map target reactive sites and occupancy.
This assesses probe specificity, identifying safe and
potent hits, even for challenging or poorly characterized
disease targets.
The possibilities of covalent fragment discovery are
being taken a step further with Evotec’s high-throughput ABPP (HT-ABPP) platform. First, workflows are streamlined using a robotic liquid handling platform for
peptide enrichment and clean-up. Stable isotope labeling
by amino acids in cell culture (SILAC) is used to assess
ligand-target binding in vivo, with the ability to analyze
many different human cell lines and primary cells.
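To make this concrete, the short Python sketch below shows the kind of summary such a competition experiment yields: per-site probe-labeling intensities from a control channel and a compound-treated channel are turned into competition ratios, and strongly competed cysteine sites are flagged as candidate ligandable positions. The site names, intensities and cutoff are hypothetical and are not taken from the Evotec platform.

```python
# Minimal sketch of competitive ABPP ratio analysis (illustrative only).
# Per-site probe-labeling intensities from a control channel and a
# compound-treated channel (e.g., SILAC pairs); all values are hypothetical.
site_intensities = {
    "PROT1_C45":  (2.0e6, 4.0e5),   # strong competition by the compound
    "PROT2_C112": (1.5e6, 1.4e6),   # little competition
    "PROT3_C278": (8.0e5, 1.0e5),   # strong competition
}

RATIO_CUTOFF = 4.0  # illustrative cutoff; real studies set this empirically

def competition_ratio(control, treated):
    """Ratios well above 1 mean the compound blocked probe labeling."""
    return control / max(treated, 1.0)  # guard against zero intensities

liganded_sites = {
    site: round(competition_ratio(ctrl, cmpd), 1)
    for site, (ctrl, cmpd) in site_intensities.items()
    if competition_ratio(ctrl, cmpd) >= RATIO_CUTOFF
}

print(liganded_sites)  # {'PROT1_C45': 5.0, 'PROT3_C278': 8.0}
```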
This DIA-MS-based platform has high precision and
speed, allowing for the consistent mapping of 70,000+
reactive cysteine sites on 14,000+ unique proteins and
12,000+ distinct reactive lysine sites on 3,500+ unique
proteins, with a throughput of up to 180 samples per day.
Designing reversible inhibitors
using DIA-MS-based photoaffinity
labeling
DIA-MS approaches are also powerful tools to identify
reversible inhibitors. As the name suggests, this type
of drug binds reversibly with its target(s), typically via
non-covalent bonds. Reversible fragments can bind to
challenging targets, and in contrast to covalent drugs,
temporary binding to off-targets helps reduce potential
toxicity.
When combined with photoaffinity labeling, DIA-MS can be used to discover novel reversible inhibitors. In this method, a library of photoreactive probes detects reversible ligand-target interactions. These probes contain a
photoreactive diazirine group, a variable fragment and
an alkyne handle attached to a reporter or affinity tag.
Following probe screening using DIA-MS, identified
fragments can be structurally elaborated to optimize
binding affinity, specificity or activity.
High-throughput photoaffinity labeling mass
spectrometry (HT-PALMS) platforms like Evotec’s are
accelerating the discovery of safe and potent reversible
compounds. Using advanced mass spectrometry,
thousands of small molecule fragment-protein
interactions can be mapped directly in human cells.
Generated hits and their structurally elaborated analogs
can be further profiled with tailored, competitive assays
to determine structure-activity relationships, identifying
the most promising chemical starting point for your
target of interest.
Enabling large-scale proteomics
discovery
Recent advances in mass spectrometry-based
proteomics are promising to take drug discovery
even further, offering higher throughput without
compromising on coverage or precision. These advances
are being driven by Evotec’s DIA-MS-based compound
screening platform “ScreenPep.” The platform integrates
fast automation and parallelization with the ability to
process up to 1,000 samples per day.
ScreenPep has a fully automated workflow that encompasses cell culture, sample preparation, DIA-MS data acquisition and data processing – ensuring
consistent and reliable results. Additionally, ScreenPep
can quantify various post-translational modifications,
such as phosphorylation, ubiquitination, acetylation
and methylation, enhancing the ability to identify novel
ligand-target interactions.
Setting new benchmarks in drug
discovery
DIA-MS-based techniques are driving the paradigm
shift towards next-generation proteomics. These
fragment-based approaches are setting new benchmarks
in drug discovery, enabling the discovery of innovative
drug modalities, including covalent and reversible
inhibitors, for challenging disease targets across the
proteome.
At Evotec, advanced platforms – like HT-ABPP,
HT-PALMS and ScreenPep – are combined with a
team of cross-disciplinary experts and over 25 years
of experience in the field. These capabilities are
transforming modern medicine, discovering novel
ligand-target interactions with unmatched precision,
depth and quality.
Figure 1: Overview of the ScreenPep™ platform. Credit: Evotec.
Evotec’s cutting-edge proteomics solutions empower your drug discovery pipeline with actionable insights into novel targets, mechanism of action and biomarker candidates. From breakthrough therapeutics to biomarkers, we deliver tailored proteomics approaches to accelerate your path to clinical success. Collaborate with us to transform proteomic data into innovative medicines. To learn more, contact us at info@evotec.com.
Perfect Your Protein Prep for Mass Spectrometry

Kaja Ritzau-Reid, PhD

Mass spectrometry (MS) is a versatile and powerful
technique for proteomics. In the last decade,
rapid developments in MS instrumentation and
methodologies have provided a new level of accuracy
and sensitivity in protein analysis, opening up new
applications in biological research, biopharmaceuticals
and diagnostics.1,2
A well-designed MS experiment can identify and
quantify unknown compounds in a biological sample.
However, it is important to ensure the quality and
reliability of your MS results by taking the time to plan
your experiment and prepare a high-quality sample.
Biological samples are complex and there is no gold standard for sample preparation. Protocols will differ depending on your sample type and experimental goals, so it is important to understand the options available for optimizing your workflow.
This guide provides some essential tips and tricks to help you plan your MS experiment, from workflow optimization to practical advice on equipment use, quality control measures and choosing the right analytical method.
Guiding principles
Top-down or bottom-up?
Two fundamental strategies are used in protein
identification by MS: top-down and bottom-up. In
top-down analysis, intact proteins are separated from
complex biological samples and analyzed in their intact
form. This can be a useful technique for the analysis of
single proteins. The bottom-up, or “shotgun” approach,
however, is still considered the mainstay of proteomic
analysis and is used to identify unknown protein
components in a sample. Proteins are digested into
peptides, separated via liquid chromatography (LC) and
analyzed by MS.3 We will focus on the bottom-up strategy in this guide.
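To make the bottom-up idea concrete, the minimal Python sketch below performs a simplified in silico tryptic digestion, cleaving after lysine (K) and arginine (R) unless the next residue is proline (P). Real digests also contain missed cleavages and non-specific products, so this is only an illustration of how a protein sequence becomes the peptides that are actually measured.

```python
import re

def tryptic_digest(protein_seq, min_length=6):
    """Simplified in silico trypsin digestion: cleave after K or R,
    but not when the following residue is proline."""
    peptides = re.split(r"(?<=[KR])(?!P)", protein_seq)
    # Very short peptides are rarely informative, so filter them out.
    return [p for p in peptides if len(p) >= min_length]

# Example sequence fragment, for illustration only.
protein = "MKWVTFISLLLLFSSAYSRGVFRRDTHKSEIAHRFKDLGEEHFK"
print(tryptic_digest(protein))
# ['WVTFISLLLLFSSAYSR', 'SEIAHR', 'DLGEEHFK']
```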
Key considerations before you start
Taking the time to properly plan your experiment before
diving into lab work is the simplest way to get on the
right track for experimental success. Here are some key
questions that you should consider before starting your
MS experiment:
• What is my biological question and how will MS
help with answering it?
• How will I acquire the sample and what handling
and storage conditions are required?
• How abundant is the protein in my sample?
• What are the appropriate controls I can use?
• What MS instrumentation is available to me?
Choose your workflow
There is no one-size-fits-all method to prepare your
protein sample for MS. The workflow you design for
a specific sample will depend on the sample source,
its complexity, the protein abundance and subcellular
location. Here are some key considerations for a
bottom-up workflow:
1. Protein extraction
Cell lysis can be achieved through mechanical or reagent-based methods. Mechanical methods (such as grinding, liquid homogenization or sonication) are ideal for use with biological tissue. However, they are generally not suitable for small sample sizes or high-throughput experiments, and reproducibility can vary.
Reagent-based methods tend to lyse cells more gently, leading to higher protein yields, and are better suited to small sample sizes. However, detergents should be used carefully to avoid damaging the cell contents and must be removed later to avoid sample contamination. It may be necessary to add protease inhibitors to your lysis buffer to prevent protein degradation by endogenous enzymes.
2. Protein digestion and extraction
Sample digestion can be performed in two ways: in-gel or in-solution. In-solution digestion is quicker to perform, has greater high-throughput potential and is typically used for small sample sizes. In-gel digestion uses SDS-polyacrylamide gel electrophoresis (SDS-PAGE); individual protein bands can be visualized by staining and excised directly from the gel. The proteins are reduced, alkylated and digested in situ, and peptides are extracted directly from the gel matrix, which simultaneously removes much of the detergents and salts. However, this can also lead to peptide loss.
You may want to consider using filter-based strategies
such as filter-aided sample preparation, which combine
the advantages of in-gel and in-solution digestion and
provide efficient protein recovery and depletion of
contaminants using a filtration step. This is particularly
useful for samples that require the use of detergents and
salts during extraction.4
3. Peptide enrichment and cleanup
Sample cleanup is essential to remove any detergents
or salts used in the peptide isolation process, as these
will interfere with the ionization and downstream
analysis. Reverse-phase methods using the C18 matrix are commonly used to capture hydrophobic peptides,
and there are a variety of commercial kits to choose
from. However, these are unable to remove detergents
or polyethylene glycol (PEG), and the SP2 method,
using carboxylate-modified paramagnetic particles, has
recently been described as an effective alternative.5,6
Graphite spin columns are another option, particularly
suited to hydrophilic peptides which bind poorly to the
C18 resins.
Post-digestion, your sample may require peptide enrichment to detect low-abundance proteins or post-translationally modified peptides. Enrichment kits are available that specifically target certain enzyme classes; these should be selected carefully based on your target peptide and its binding mechanism.
In recent years, several one-pot workflows have emerged that streamline sample preparation for MS proteomic analysis by merging steps. Consider this approach for high-throughput applications.7,8
Keeping it compatible
Make sure that the buffers and reagents you use are
compatible with MS. Here are some tips to keep in mind
as you plan your workflow:
• Use high-quality LC-MS grade water.
• Avoid the use of strong acids (like trifluoroacetic acid), which can cause ion suppression and mask your peptide signals.
• Avoid the use of PEG-based detergents such as Triton X-100, Tween and NP-40, as these can interfere with the peptide signals.
• Avoid the use of non-volatile organic solvents such
as DMF or DMSO, as these are highly viscous and
can damage instrument parts.
• Avoid the use of salts and non-volatile buffers in the
final purification steps of your sample prep.
Using the right tools for the job
Before starting your experiment, run through the
equipment that you will require. Here are some tips for
choosing the right tools for the job:
• It is important to eliminate any risk of
contamination, so using a laminar air flow hood and
protective clothes is generally recommended during
sample preparation.
• Keratin is one of the most common contaminants in
proteomics. The protein comes from your skin, so
unless you are specifically working with skin cells –
wear gloves!
• Use low-binding tubes and pipette tips. Proteins and
peptides bind to untreated plasticware and using
low-binding tubes can significantly reduce protein
and peptide loss.
• Use filter pipette tips to prevent contamination.
• Don’t use autoclaved plastic material. The heat and pressure in the autoclaving process can degrade the plastic, leaving residues that can contaminate your samples.
• Avoid washing any glassware that you plan to
use with detergents as this can contaminate your
sample. Use hot water and an organic solvent
instead.
Quality control
Take the time to monitor each step of your sample
preparation. This will ultimately save you time by
highlighting any problems with your sample before
you proceed with your MS analysis. Here are some key
quality control measures to consider during your sample
preparation:
• Monitor the protein content of your input sample, preferably at each step of your sample preparation, using standard protein assays such as Coomassie staining or the bicinchoninic acid (BCA) assay.
• Always check that the pH of your sample is in the
correct range.
• Use blank samples (solvents without the analyte) to
help identify any contaminants.
• Use internal standards to help correct for any
variations in your sample preparation and allow for
more precise quantification during data analysis.
Next steps
Selecting the right approach for your MS analysis is
highly dependent on your sample, experiment goals
and what is available to you. The technology in MS
instrumentation has rapidly developed in the last decade
and there are many different types of MS instruments to
choose from.9
The first step in MS analysis involves the ionization of
your analyte. The two most common ionization methods
are matrix-assisted laser desorption/ionization (MALDI)
and electrospray ionization (ESI).2 MALDI is typically
used for larger biomolecules, and the peptides are mixed
into a matrix for ionization by a UV-emitting laser. ESI
requires a liquid solution free from detergents and salts
and is commonly coupled with liquid chromatography
to separate the peptide components before they enter
the mass spectrometer. This is a popular approach,
ideal if you are dealing with complex biological samples
requiring quantitative analysis. It is commonly referred
to as a “shotgun” approach owing to its non-targeted,
broad-spectrum analysis.
Final considerations
Remember, there is no standard method for preparing
your samples for MS experiments and your workflow
will be highly dependent on your individual sample and
experiment requirements. Simplify your sample as best you can by removing contaminants that may suppress or mask your peptides of interest. If possible, use a core facility to assist you and provide advice. Careful planning will always pay off, avoiding costly and time-consuming mistakes in the long run.
REFERENCES
1. Yates JR, Ruse CI, Nakorchevsky A. Proteomics by mass
spectrometry: approaches, advances, and applications.
Annu Rev Biomed Eng. 2009;11:49-79. doi: 10.1146/annurev-bioeng-061008-124934
2. Rozanova S, Barkovits K, Nikolov M, Schmidt C, Urlaub H, Marcus
K. Quantitative mass spectrometry-based proteomics: an
overview. In: Marcus K, Eisenacher M, Sitek B, eds. Quantitative
Methods in Proteomics. Vol 2228. Springer US; 2021:85-116. doi:
10.1007/978-1-0716-1024-4_8
3. Zhang Y, Fonslow BR, Shan B, Baek MC, Yates JR. Protein analysis
by shotgun/bottom-up proteomics. Chem Rev. 2013;113(4):2343-
2394. doi: 10.1021/cr3003533
4. Wiśniewski JR, Zougman A, Nagaraj N, Mann M. Universal
sample preparation method for proteome analysis. Nat Methods.
2009;6(5):359-362. doi: 10.1038/nmeth.1322
5. Waas M, Pereckas M, Jones Lipinski RA, Ashwood C, Gundry RL.
SP2: rapid and automatable contaminant removal from peptide
samples for proteomic analyses. J Proteome Res. 2019;18(4):1644-
1656. doi: 10.1021/acs.jproteome.8b00916
6. Wojtkiewicz M, Berg Luecke L, Kelly MI, Gundry RL. Facile
preparation of peptides for mass spectrometry analysis in
bottom‐up proteomics workflows. Curr Protoc. 2021;1(3):e85. doi:
10.1002/cpz1.85
7. Ye Z, Sabatier P, Martin-Gonzalez J, et al. One-Tip enables
comprehensive proteome coverage in minimal cells and single
zygotes. Nat Commun. 2024;15:2474. doi: 10.1038/s41467-024-
46777-9
8. Chen W, Adhikari S, Chen L, et al. 3D-SISPROT: A simple and integrated spintip-based protein digestion and three-dimensional peptide fractionation technology for deep proteome profiling. J Chromatogr A. 2017;1498:207-214. doi: 10.1016/j.chroma.2017.01.033
9. Li C, Chu S, Tan S, et al. Towards higher sensitivity of mass
spectrometry: a perspective from the mass analyzers. Front
Chem. 2021;9:813359. doi: 10.3389/fchem.2021.813359
Unlocking Nature’s Secrets With Metabolome Informed Proteome Imaging

Kristin Burnum-Johnson, PhD, and Marija Velickovic

We developed and demonstrated a new metabolome-informed proteome imaging (MIPI) workflow for studying microscale microhabitats in complex ecosystems. Our workflow combines state-of-the-art analytical instrumentation that generates rich spatial metabolome and proteome data to map biological pathways.
In our roles as Pacific Northwest National Laboratory
(PNNL) scientists, we led this multi-institutional study,
which was published in Nature Chemical Biology.
Harnessing fungal communities for
sustainable plant degradation and
everyday products
A major research focus at PNNL is to understand how
natural systems efficiently perform complex tasks, such
as the breakdown of plants into high-value molecules.
The leaf-cutter ant fungal garden ecosystem serves as
a naturally evolved model for efficient plant biomass
degradation. Characterizing the degradation processes
mediated by the symbiotic fungus, Leucoagaricus
gongylophorus (L. gongylophorus), is challenging due
to the system’s dynamic metabolisms and spatial
complexity. In this study, we performed microscale
imaging across sections of the Atta cephalotes fungal
garden and used MIPI to map lignin degradation.
We focused on natural plant degradation because plants
are vital to the global economy, creating products like
fuels, medicines, insulation and food-safe packaging.
Scientists seek cleaner, cheaper methods to break down
tough plant materials, but current methods often leave
behind waste, such as lignin. Natural fungal communities
efficiently decompose these materials into usable
nutrients. Understanding these processes at a molecular
level can help us sustainably create everyday products.
Overcoming traditional limitations:
MIPI unveils key biochemical
pathways in fungal garden
ecosystems
Traditional methods measure metabolites, enzymes and
other molecules in bulk, providing average data that
mask detailed information. MIPI offers detailed insights
into biochemical pathways within complex biological
matrices. Utilizing matrix-assisted laser desorption/
ionization mass spectrometry imaging (MALDI-MSI),
MIPI visualizes metabolite locations and identifies
microscale activity hotspots. These hotspots are then
processed using microdroplet processing in one pot for
trace samples (microPOTS), followed by sensitive liquid
chromatography-tandem mass spectrometry (LC-MS/
MS) proteomic analyses. This approach thoroughly
examines the whereabouts, timing and molecular
participants of biochemical reactions, providing a
comprehensive view of intricate biological pathways.
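As a highly simplified illustration of the hotspot-picking step, the Python sketch below thresholds a small ion image and returns the pixel coordinates that would be targeted for microdissection and microPOTS proteomics. The intensity grid and percentile cutoff are hypothetical; the published workflow relies on dedicated imaging and annotation software rather than code like this.

```python
import numpy as np

# Hypothetical ion image for one metabolite (pixel intensities from MALDI-MSI).
ion_image = np.array([
    [0.1, 0.2, 0.1, 0.1],
    [0.2, 0.9, 0.8, 0.1],
    [0.1, 0.8, 0.7, 0.2],
    [0.1, 0.1, 0.2, 0.1],
])

# Call a pixel a "hotspot" if its intensity exceeds the 90th percentile.
threshold = np.percentile(ion_image, 90)
hotspot_coords = np.argwhere(ion_image > threshold)

print(threshold)        # intensity cutoff used
print(hotspot_coords)   # pixel coordinates to target for proteomic follow-up
```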
Using our new imaging technique, MIPI, we examined
12 μm-thick sections of the leaf-cutter ant fungal garden
ecosystem to map lignin degradation and visualize colocalized
metabolites and proteins.
Untargeted MALDI-MSI analysis was used to look at
molecular distributions across fungal garden sections.
Using the METASPACE annotation platform and KEGG
database, we identified metabolites and mapped distinct
microscale lignin and primary metabolite microhabitats.
These microscale microhabitats underwent detailed
proteomic analysis using laser-capture microdissection
and microPOTS followed by LC-MS/MS. Spatial
metabolome and proteome data were then integrated.
The key findings of the paper were:
• Over 7,000 unique and taxon-specific peptides were identified from microscale samples. A diverse array of enzymes actively participating in the breakdown of lignocellulose and aromatic compounds was also revealed.
• A comprehensive view of metabolic pathways
within microscale regions of this complex
ecosystem was obtained from the integrated
metabolome and proteome data.
• Critical metabolites and enzymes that drive
essential biochemical reactions in plant
degradation were identified using our MIPI
method.
• Fungi were found to dominate and lignin
breakdown pathways were identified. Our
findings highlighted that fungi are the primary
degraders of plant material in the system and
provided detailed pathway-level resolution of lignin
breakdown by the fungus.
MIPI reveals fungal dominance
in plant material degradation
and detailed pathways for lignin
breakdown
Characterizing biochemical pathways in complex,
heterogeneous samples has been challenging due to
the limitations of current methods, which focus on bulk
averages and obscure finer details. Our MIPI technique
overcomes these limitations by visualizing individual
molecular components, revealing specific biochemical
reactions in plant degradation.
Using MIPI, we unveiled key metabolites and enzymes
essential for these reactions within the leaf-cutter ant
fungal garden ecosystem. We identified significant
fungal involvement in breaking down plant material,
particularly lignin. MIPI allowed us to observe colocalized
metabolites and proteins, providing detailed
pathway-level insights into lignin degradation by fungi.
Untargeted MALDI-MSI analysis highlighted
heterogeneous spatial distributions of molecular
features and identified distinct lignin and primary
metabolite microhabitats. Detailed proteomic analysis
using laser-capture microdissection and microPOTS,
followed by LC-MS/MS, revealed thousands of unique
and taxon-specific peptides. These results emphasized
fungi’s prominent role in lignocellulose degradation
through various enzymes involved in carbohydrate and
aromatic compound metabolism.
Through MIPI, we discovered that the fungus, L.
gongylophorus, is the primary degrader of plant material
and provided detailed pathways for lignin breakdown.
These insights enhance our understanding of complex
metabolic pathways, paving the way for advancements
in both environmental research and biofuel and
bioproduct development.
We now have a detailed microscale view of how plant
degradation naturally occurs within this fungal system.
To develop new methods for producing biofuels
and bioproducts, we must start with the most basic
components. By identifying which metabolites, enzymes
and other chemicals initiate reactions at specific times
and locations, we can replicate these processes in the
lab. This knowledge can inform new strategies for
creating better bioproducts and biofuels. We have
successfully pinpointed the necessary molecules
and biochemical reactions at a molecular level. With
this natural blueprint, we can now apply the same
techniques in the laboratory.
A current limitation of our MIPI workflow is the
need for multiple adjacent tissue sections, as different
modalities require specific sample slides. To address
this, future efforts will focus on enabling MIPI to
perform multi-omics profiling from a single tissue
section while maintaining or even improving the
sensitivity of all modalities.
Expanding horizons: Applying MIPI
to future research and broader
scientific endeavors
With our molecular-level understanding of these
processes, we can now apply this method to a range of
future experiments and broader scientific endeavors.
We aim to study how fungal communities respond and
protect themselves amid various disturbances, including
those caused by extreme weather. Temperature
fluctuations, shifts in precipitation patterns and
increased exposure to pathogenic microbes can have
devastating environmental impacts. Using this method,
we can investigate how these communities respond to
such stresses.
We are thrilled to have a tool that allows us to examine
these processes in unprecedented detail, opening a
whole new world of exploration.
Beyond environmental research, MIPI holds significant
promise in clinical applications, offering valuable
insights into cellular diversity and disease progression.
Future research will leverage MIPI’s capability to
map biological systems intricately, investigating
fungal responses to environmental stresses like
extreme weather. This method not only deepens our
understanding of these processes but also paves the
way for new explorations and applications in both
environmental and clinical settings.
REFERENCE
Veličković M, Wu R, Gao Y, et al. Mapping microhabitats of
lignocellulose decomposition by a microbial consortium. Nat Chem Biol.
2024;20(8):1033-1043. doi:10.1038/s41589-023-01536-7
Applications of immunoproteomics

There are many potential applications of immunoproteomics, offering valuable insights into disease mechanisms. While a few key examples are outlined below, the possibilities extend far beyond these, providing a glimpse into the areas of diagnostic medicine that can benefit from immunoproteomics.

1. Vaccine efficacy and development
Vaccines protect against bacterial and viral diseases by stimulating the production of neutralizing antibodies against specific pathogens. While vaccination has significantly reduced the incidence of many serious diseases, challenges remain for pathogens without vaccines or for individuals with weakened immune systems who may not respond effectively. Immunoproteomics plays a crucial role in vaccine development and efficacy by identifying highly effective immunogens and monitoring immune responses.

2. Monitoring antigenicity of therapeutic drugs
Immunogenicity is a significant problem associated with protein therapeutics. Due to artificial production processes, recombinant therapeutics are often not completely identical to their native human counterparts. Sometimes, these therapeutics elicit an immune response that interferes with drug efficacy. In principle, immunoproteomic approaches could be used to define the patient’s immune response against a panel of related (protein) drugs to aid in selective drug application and optimize individualized treatment.

3. Diagnosis and prognosis of autoimmune diseases
Autoimmune diseases arise from erroneous activation of the immune system, resulting in the attack of self-proteins within the human body. Examples of autoimmune diseases include rheumatoid arthritis, psoriasis and systemic lupus erythematosus (SLE). In all these conditions, immunoproteomics holds great promise to aid in the diagnosis, prognosis and monitoring of autoimmune diseases.

4. Early-stage cancer detection
Many cancer patients produce antibodies against antigens that are specifically expressed by malignant cells. This immune response is a valuable source of diagnostic and prognostic information. Immunoproteomics can serve as a diagnostic approach for the early detection and monitoring of different types of cancer. However, the timing of antibody responses during carcinogenesis remains poorly understood, necessitating careful evaluation to minimize the risk of false-negative diagnoses.
Advancing Mass Spectrometry Data Analysis Through Artificial Intelligence and Machine Learning

Isabel Ely, PhD

Since mass spectrographs and spectrometers were introduced in the early 1900s,1 mass spectrometry
(MS) has undergone tremendous technological
improvements. Once a methodology primarily used by
chemists, MS is now an incredibly versatile analytical
technique with several applications in research
including structural biology, clinical diagnostics,
environmental analysis, forensics, food and beverage
analysis, omics and beyond.
MS produces a vast quantity of data that needs to be
analyzed. Managing, processing and interpreting these
large data outputs is computationally intensive and
often prone to errors, particularly when manual or
semi-automated processes are used. Consequently, artificial intelligence (AI) and machine learning (ML) have become immensely popular for processing MS-generated data and for statistical analysis, as they can be applied to various biological disciplines,2 limit errors and enhance data analysis.
This article delves into what MS data analysis entails
and its associated challenges, how AI/ML can aid
analyses and exciting potential future developments in
the field, with specific applications to proteomics and
metabolomics research.
What does MS data analysis
entail?
“Data analysis in proteomics and metabolomics is
a complex, multi-step process that begins with the
collection of biological samples and culminates in the
extraction of meaningful biological insights,” Dr. Wout
Bittremieux, assistant professor in the Adrem Data
Laboratory at the University of Antwerp, said.
After laborious sample preparation to extract the proteins, peptides or metabolites of interest,3 the analytes are ionized and introduced into a mass spectrometer, where they are detected based on their mass-to-charge (m/z) ratio, producing a mass spectrum. The coupling of MS
with other analytical tools, such as gas chromatography
and liquid chromatography, allows for the further
separation and identification of such analytes.
“One of the key challenges in MS is the accurate
annotation of MS spectra to their corresponding
molecules,” Dr. Bittremieux said.
“In proteomics, the dominant method for this task is
sequence database searching. This relies on comparing
experimental to theoretical spectra simulated from
peptides assumed to be present. However, these
theoretical spectra are often oversimplified and do not
capture detailed fragment ion intensity information, which
can lead to significant ambiguities and false identifications.”
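To illustrate what these "theoretical spectra" contain, the Python sketch below generates singly charged b- and y-ion m/z values for a candidate peptide from standard monoisotopic residue masses and counts how many fall within a small tolerance of the peaks in a hypothetical experimental spectrum. This is a bare-bones version of the matching idea behind database searching; real search engines also model intensities, charge states and modifications, and control false discoveries statistically.

```python
# Monoisotopic residue masses (Da) for the amino acids used below.
RESIDUE_MASS = {
    "P": 97.05276, "E": 129.04259, "T": 101.04768,
    "I": 113.08406, "D": 115.02694,
}
WATER, PROTON = 18.01056, 1.00728

def by_ions(peptide):
    """Singly charged b- and y-ion m/z values for a peptide sequence."""
    masses = [RESIDUE_MASS[aa] for aa in peptide]
    b = [sum(masses[:i]) + PROTON for i in range(1, len(masses))]
    y = [sum(masses[i:]) + WATER + PROTON for i in range(1, len(masses))]
    return b + y

def count_matches(theoretical, observed_peaks, tol=0.02):
    """Count theoretical ions landing within `tol` m/z of an observed peak."""
    return sum(any(abs(t - p) <= tol for p in observed_peaks) for t in theoretical)

# Hypothetical experimental peak list (m/z values only, intensities ignored).
observed = [98.06, 227.10, 324.16, 425.20, 538.29, 653.31, 703.31]
print(count_matches(by_ions("PEPTIDE"), observed))  # 7 of 12 ions matched
```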
Once data are quantified, either relatively or absolutely,
statistical analyses can take place to facilitate biological
interpretation.
“To contextualize the results, pathway analysis
tools can be used to map the identified proteins or
metabolites onto known biological pathways to help
in understanding the functional implications of the
changes observed in the data. Alternatively, biomarker
candidates can be identified based on their ability to
distinguish between different biological conditions or
groups,” Bittremieux explained.
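As a simplified example of one common form of pathway analysis, over-representation analysis, the sketch below uses a hypergeometric test to ask whether a list of significantly changed proteins contains more members of a given pathway than expected by chance. The counts are hypothetical, and real tools repeat this across many pathways and correct for multiple testing.

```python
from scipy.stats import hypergeom

# Hypothetical counts for a single pathway.
background = 5000   # proteins quantified in the experiment
in_pathway = 80     # of those, annotated to the pathway of interest
hits       = 150    # proteins significantly changed between conditions
overlap    = 12     # changed proteins that are also pathway members

# Probability of seeing at least `overlap` pathway members among the hits.
p_value = hypergeom.sf(overlap - 1, background, in_pathway, hits)
print(f"enrichment p-value: {p_value:.2g}")
```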
Applying AI/ML to MS data
analysis
Although some scientists still have concerns about large-scale AI implementation, AI and ML have become indispensable tools for MS data analysis, aiding clinical decisions, guiding metabolic engineering and stimulating fundamental biological discoveries.
Applying AI/ML in MS research aims to minimize errors associated with data analysis – including high noise levels, batch effects during measurements and missing values4 – enhance usability and maximize data
outputs.5 Further, training ML models on large datasets
of empirical MS spectra allows the generation of highly
accurate predicted spectra that closely match the
experimental data.6 This overcomes the limitations of
traditional sequence database searching, which relies on
crude, theoretical spectra.
Developments in AI/ML have led to more accurate,
efficient and comprehensive interpretations of
biological data, including de novo peptide sequencing.7
“De novo peptide sequencing, which involves determining
the peptide sequence directly from tandem (MS/
MS) spectra without relying on a reference database,
is a challenging problem. ML approaches are starting
to impact this area significantly by learning patterns
from known spectra and using them to predict peptide
sequences from unknown spectra, making it feasible to
analyze complex proteomes without relying solely on
existing protein databases,” Dr. Bittremieux said.
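The reason the problem is tractable at all is that neighboring fragment ions in a spectrum differ by the mass of exactly one residue. The toy Python sketch below reads amino acids off the gaps in a hypothetical b-ion ladder; modern ML-based de novo tools learn far richer patterns from full spectra, but this is the underlying signal they exploit.

```python
# Monoisotopic residue masses (Da), truncated to the residues needed here.
RESIDUE_MASS = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
                "V": 99.06841, "L": 113.08406, "E": 129.04259}

def infer_residues(peaks, tol=0.02):
    """Map the gaps between successive fragment-ion peaks to amino acids."""
    residues = []
    for lo, hi in zip(peaks, peaks[1:]):
        delta = hi - lo
        match = [aa for aa, m in RESIDUE_MASS.items() if abs(m - delta) <= tol]
        residues.append(match[0] if match else "?")
    return "".join(residues)

# Hypothetical ladder of singly charged b ions from one spectrum.
b_ions = [98.06, 227.10, 324.16, 411.19, 510.26]
print(infer_residues(b_ions))  # EPSV
```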
Another area where AI/ML has been applied in MS data analysis is repository-scale data analysis.8 Public data
repositories have continued to expand, now containing
millions to billions of MS spectra. Despite existing data
providing ample opportunity to extract new biological
insights, the sheer volume of data presents significant
challenges in terms of data processing and analysis.
“We have developed AI algorithms capable of
performing large-scale analyses across these
repositories, identifying patterns across experiments
and detecting novel peptides and proteins that were
previously missed. This has led to discoveries that
would have been impossible with manual or traditional
computational methods.”
Recent developments in AI/ML
Although advancements in AI have been fruitful, the unique nature of MS data makes a direct translation of these developments to MS data analysis non-trivial.
“One of the most significant recent advancements in AI
relevant to MS data analysis is the development of more
sophisticated deep learning models capable of handling
high-dimensional data and extracting intricate patterns,”
said Bittremieux.
“For example, transformer neural networks, which were
originally developed for natural language processing, are
now effectively used to ‘translate’ between sequences
of peaks in tandem MS spectra to sequences of amino
acids during de novo peptide sequencing. These models
can learn from vast amounts of empirical MS data,
identifying subtle features that traditional methods
might overlook.”
“Despite such advancements, the successful application
of AI to MS data still requires deep expertise in both
AI and MS. This multidisciplinary skill set remains
relatively rare, which has slowed the broader adoption
of AI in the field. However, as more researchers receive
training in both areas and as AI tools become more
accessible, we are beginning to see a new generation of
scientists capable of bridging this gap.”
Looking towards the future
Although significant advancements in AI and ML have
aided the continual development of MS data analysis,
there is still room for improvement.
“One of the key areas where I believe future
developments should be focused is on the generation
and curation of high-quality, large-scale datasets. While
advancements in AI model architectures have been
impressive, these models are only as good as the data
they are trained on,” discussed Bittremieux.
Greater availability of diverse MS data sets would
ultimately enable the development of AI tools suitable
for use across multiple experimental conditions in
differing biological topics.9
“These datasets should include comprehensive
annotations, such as accurate peptide and metabolite
identifications, quantification data and metadata
related to sample preparation and instrument settings.
This diversity will enable AI models to learn more
generalizable patterns, improving their performance
across different applications.”
Researchers are sometimes at fault for testing their models on cherry-picked datasets. This contributes to a lack of standardized evaluations assessing
the performance of different models. Dr. Bittremieux
detailed that “the development of benchmarking
suites would allow for a fair comparison of different
algorithms, fostering transparency and driving genuine
progress in the field.”
“As AI tools become more accessible and interpretable,
we will likely see a surge in innovative applications, from
personalized medicine to environmental monitoring.”
REFERENCES
1. Wilkinson DJ. Historical and contemporary stable isotope tracer
approaches to studying mammalian protein metabolism. Mass
Spectrom. Rev. 2018;37(1):57-80. doi:10.1002/mas.21507
2. Neagu AN, Jayathirtha M, Baxter E, Donnelly M, Petre BA, Darie
CC. Applications of tandem mass spectrometry (MS/MS) in protein
analysis for biomedical research. Molecules. 2022;27(8):2411.
doi:10.3390/molecules27082411
3. Luque-Garcia JL, Neubert TA. Sample preparation for
serum/plasma profiling and biomarker identification by
mass spectrometry. J. Chromatogr. A. 2007;1153(1):259-276.
doi:10.1016/j.chroma.2006.11.054
4. Liebal UW, Phan ANT, Sudhakar M, Raman K, Blank LM.
Machine learning applications for mass spectrometry-based
metabolomics. Metabolites. 2020;10(6):243. doi:10.3390/
metabo10060243
5. Beck AG, Muhoberac M, Randolph CE, et al. Recent developments
in machine learning for mass spectrometry. ACS Meas Sci Au.
2024;4(3):233-246. doi:10.1021/acsmeasuresciau.3c00060
6. Adams C, Gabriel W, Laukens K, et al. Fragment ion intensity
prediction improves the identification rate of non-tryptic peptides
in timsTOF. Nat Commun. 2024;15(1):3956. doi:10.1038/s41467-024-
48322-0
7. Yilmaz M, Fondrie WE, Bittremieux W, et al. Sequence-to-sequence translation from mass spectra to peptides with a
transformer model. Nat Commun. 2024;15(1):6427. doi:10.1038/
s41467-024-49731-x
8. Bittremieux W, May DH, Bilmes J, Noble WS. A learned embedding
for efficient joint analysis of millions of mass spectra. Nat
Methods. 2022;19(6):675-678. doi:10.1038/s41592-022-01496-1
9. Dens C, Adams C, Laukens K, Bittremieux W. Machine learning
strategies to tackle data challenges in mass spectrometry-based
proteomics. J Am Soc Mass Spectrom. 2024;35(9):2143-2155.
doi:10.1021/jasms.4c00180
ABOUT THE INTERVIEWEE
Dr. Wout Bittremieux is an assistant professor in the Adrem Data Lab at the University of Antwerp, Belgium and a world-leading expert in computational mass spectrometry. He leads
a research team that develops advanced artificial intelligence
and bioinformatics tools to analyze mass spectrometry-based
proteomics and metabolomics data.
Advances in Biomarker Discovery and Analysis

Molly Coddington

Biomarkers can offer critical information about an
individual’s health status, disease prognosis or how they
might respond to a treatment.
Developments in high-throughput technologies that
enable the efficient, sensitive and large-scale analysis
of biological samples have significantly advanced
biomarker research over recent years.
In this article, we highlight just some of the research
trends impacting biomarker discovery and analysis,
including advances in proteomics research, artificial
intelligence (AI) and microsampling.
Proteomics powers biomarker
discovery
Looking to the future of biomarker research, Dr.
Irene De Biase, professor of clinical pathology at the
University of Utah, expects to see an exponential
growth in studies that use omic approaches to evaluate
large datasets to identify correlations between
biomarkers and clinical outcomes: “For instance, three
months into 2025, and there are already almost 300
publications using proteomics in various cohorts of
patients with diabetes mellitus,” she told Technology
Networks.*
What is the proteome?
The proteome refers to the entire set of proteins
found in a cell, tissue or organism at a specific
point in time. While an individual’s genome is
relatively static throughout their lifetime, the
proteome is dynamic and actively responds to
both external and internal signals.
Proteins drive cellular functions, earning their nickname
as the “workhorses” of the cell. Proteomics, the large-scale study of the proteome, has emerged as a powerful field for biomarker discovery in the post-genomic era.
Liquid chromatography-mass spectrometry (LC-MS) is the primary method used for protein biomarker discovery due to its high level of sensitivity and specificity. LC-MS has enabled researchers to detect disease-specific protein signatures, analyze post-translational modifications (PTMs) that affect disease
mechanisms and discover novel biomarkers for
conditions such as cancer and neurodegenerative
diseases.1,2,3
As the demand for higher throughput and scalability in biomarker research grows, recent trends in LC-MS proteomics analysis are focused on improving and automating workflows, as well as refining sample preparation and analytical processes.4 Kverneland et
al. recently published a fully automated workflow that
combines sample digestion, cleanup and loading, and
demonstrated its use for processing 192 HeLa cell
samples in 6 hours.5 “The workflow is optimized for
minimal sample starting amount to reduce the costs
for reagents needed for sample preparation, which is
critical when analyzing large biological cohorts,” the
researchers said.
While blood is the most commonly analyzed biofluid
due to its accessibility, valuable insights can also be
gained from other fluids such as urine, cerebrospinal
fluid (CSF), serum and breast milk. A recent paper by
Zhang et al. outlined an automated, scalable workflow
for preparing hundreds to thousands of samples,
including milk and urine, with only minor modifications
to the initial steps of a workflow originally designed
for blood plasma, serum and CSF.6 This approach
expands the scope of biomarker discovery by enabling
the automated analysis of a broader range of biological
samples.
Advances in protein enrichment and depletion methods
are also enhancing MS-based proteomics. High
abundance proteins in samples such as plasma can
mask the signal of low-abundance – but biologically
interesting – proteins. A variety of enrichment methods
are being implemented to support the discovery and
identification of low-abundance proteins that carry
biomarker potential.7 For example, Palstrøm et al. recently performed an MS-based proteomic analysis of plasma samples that were enriched using magnetic p-aminobenzamidine (ABA) affinity probes.8 The
samples were gathered from 45 patients with abdominal
aortic aneurysm (AAA) – a life-threatening disorder that
sees improved outcomes with a fast diagnosis – and 45
matched controls. The researchers identified plasma
proteome alterations linked to AAA and, using machine
learning, developed a potential biomarker panel for
earlier and more accurate diagnosis.
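To give a flavor of the machine-learning step in studies like this (this is not the authors' actual pipeline), the sketch below trains an L1-penalized logistic regression on a hypothetical matrix of plasma protein intensities from cases and controls and reports cross-validated performance. The proteins retained with non-zero coefficients would then suggest a candidate biomarker panel for follow-up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical data: 90 samples x 200 plasma protein intensities,
# labeled 1 = case and 0 = matched control.
X = rng.normal(size=(90, 200))
y = rng.integers(0, 2, size=90)
X[y == 1, :5] += 1.0  # pretend the first five proteins differ in cases

# The L1 penalty shrinks uninformative proteins to zero, so the features
# that survive suggest a candidate biomarker panel.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
)

scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f}")
```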
Beyond MS-based approaches, affinity-based
techniques are increasingly being used to analyze the
proteome and uncover clinically relevant biomarkers.
These techniques rely on specific interactions between
proteins and binding agents, including aptamers and/or
antibodies.
The UK Biobank recently announced the launch of
the world’s “most comprehensive study of the proteins
circulating in our bodies”. The project will harness
an antibody-based platform to measure up to 5,400
proteins in 600,000 samples, half a million of which
will be obtained from UK Biobank participants, while
100,000 samples will be taken from the same volunteers
up to 15 years later.
“Adding proteomic data for the full UK Biobank cohort
will be an absolute game changer for prediction of
disease onset and prognosis, particularly for the many
neglected diseases for which good prospective data
are lacking,” Professor Claudia Langenberg, director of
the Precision Healthcare University Research Institute
at Queen Mary University of London, said. “These
include debilitating and life-threatening diseases, such as
polycystic ovary syndrome and motor neurone disease.
Just imagine if we could detect these and many other
conditions much earlier than is currently possible.”
A recent preprint study by Kirsher et al. compared different plasma proteomics platforms – including aptamer-, antibody- and MS-based approaches – by applying these methods to the same cohort of samples, covering 13,000 proteins.9** Though the study is yet to be peer-reviewed, it suggests that trade-offs in coverage
across these platforms have implications for the future
of biomarker discovery and translational research.
Beyond proteomics, other omics fields – such as
genomics, transcriptomics and metabolomics – continue
to drive biomarker discovery, with growing efforts now
focused on integrating these data through multiomics
approaches to gain a more comprehensive view of
disease biology.
AI and machine learning enhance biomarker research
AI-based approaches, such as deep learning and
machine learning, are finding a variety of applications in
biomarker discovery, development and validation. Such
technologies can process and integrate vast, complex
multiomics datasets more efficiently and at a higher
capacity than traditional analytical tools.
In 2022, researchers published a novel pan-cancer
proteomic map of 949 human cell lines across over
40 types of cancer.10 They used a deep learning-based computational pipeline – DeeProM – to integrate large volumes of data. “DeeProM enabled the full
integration of proteomic data with drug responses
and CRISPR-Cas9 gene essentiality screens to build a
comprehensive map of protein-specific biomarkers of
cancer vulnerabilities that are essential for cancer cell
survival and growth,” co-author Associate Professor
Qing Zhong of the Children’s Medical Research
Institute, University of Sydney, told Technology
Networks. The analyses identified biomarkers detectable
only at the proteomic level, showing enhanced
predictive accuracy compared to models that rely solely
on gene expression data.
Deep learning has also shown success in analyzing
histopathology slides to facilitate the discovery of
novel biomarkers. A recent preprint by Shulman et
al. presents Path2Space, a deep learning model that
predicts spatial gene expression from histopathology
slides of breast cancer tumors.11** The researchers aimed
to identify microenvironment characteristics and other
spatial biomarkers associated with treatment response.
When applied to two large cohorts of breast cancer patients treated with trastuzumab and chemotherapy, Path2Space identified spatial biomarkers and predicted therapy response with high accuracy.
While these examples showcase AI’s utility in oncology
biomarker research, such technologies are also
facilitating new biomarker discoveries across diseases
including Alzheimer’s, diabetes and cardiovascular
disease, among others.12,13,14
Remote sampling devices: The
future of biomarker analysis?
Measuring blood biomarkers in clinical and research
settings presents challenges, including geographical
limitations, the cost and inconvenience of in-clinic
venipuncture and infrequent sampling.
Microsampling is an emerging, minimally invasive
alternative approach that enables the collection of
samples in a remote location for subsequent laboratory
analysis using remote sampling devices, of which there
are now several types.15
Dr. Michael Snyder, Stanford W. Ascherman Professor
of Genetics at Stanford Medicine, summarized why
microsampling is an attractive approach to profiling
changes in health: “It lets you measure hundreds
to thousands of molecules with good accuracy, it’s
convenient and the samples are collected in a natural
setting – i.e., a person’s home – which should give a better indication of their true biochemical state rather than using samples collected in a clinic.”
“Microsampling can help science and medicine become
more inclusive, reduce costs for healthcare systems
and help people access important insights about their
health,” Dr. Jennifer Van Eyk, professor of cardiology,
director of the Advanced Clinical Biosystems Institute
and the Erika Glazer Endowed Chair in Women’s Heart
at Cedars-Sinai Medical Center, said. Van Eyk has been
developing MS-based workflows for remote sampling
devices, and comparing their viability to existing clinical
sampling methods, for several years.16
Snyder and colleagues recently published a strategy
for regularly capturing and analyzing thousands of
metabolites, lipids, cytokines and proteins from 10 μl
of blood.17 “In two proof-of-principle studies, we first
demonstrate the profiling of a dynamic response to
ingestion of a mixed meal shake and discover high
heterogeneity in individual metabolic and immune
responses, and second, we perform high-resolution
profiling of an individual over one week enabling
the identification and quantification of thousands of
molecular changes and associations across ‘omes’ at a
personal level,” the authors said.
Snyder said that his team now runs most of its studies using a microsampling approach. “I think many, if not
most, studies will be run this way in the future. It will
also be used for home health monitoring,” Snyder said.
A recent review by Protti et al. emphasized how
microsampling technologies hold the potential
to democratize access to high-quality analytical
testing.15 “However, for the very same reasons, to obtain
a transformative effect on a wide range of analytical
applications for both clinical and non-clinical purposes,
these innovative microsampling techniques must first
achieve wide acknowledgment and adoption, which in
turn can only be obtained through consistently effected
commercial availability, as well as regulatory approval
for the intended use, in most countries and regions,”
they said.
“It’s going to be a long road, but I think we have a
responsibility to help people access their data and have
some level of control over their health,” Van Eyk said.
The next era of biomarker
innovation
Biomarker research is evolving rapidly. While the trends
and innovations highlighted in this article are by no
means an exhaustive list, they represent increasingly
precise, scalable and accessible approaches to
understanding human health and disease. As multiomics
integration and decentralized data collection continue
to advance, the future of biomarker discovery holds
great potential for transforming diagnostics, monitoring
wellness and personalizing treatments across a wide
range of conditions.
*Interview completed in March 2025.
** This article is based on research findings that are yet to be peer-reviewed. Results are therefore regarded as preliminary and should be interpreted as such. For further information, please contact the cited source.
REFERENCES:
1. Gajula SNR, Khairnar AS, Jock P, et al. LC-MS/MS: A
sensitive and selective analytical technique to detect
COVID-19 protein biomarkers in the early disease
stage. Expert Rev Proteomics. 2023;20(1-3):5-18.
doi: 10.1080/14789450.2023.2191845
2. Jang HN, Moon SJ, Jung KC, et al. Mass spectrometry-based
proteomic discovery of prognostic biomarkers in adrenal cortical
carcinoma. Cancers. 2021;13(15). doi: 10.3390/cancers13153890
3. Tao QQ, Cai X, Xue YY, et al. Alzheimer’s disease early
diagnostic and staging biomarkers revealed by large-scale
cerebrospinal fluid and serum proteomic profiling. The Innovation.
2024;5(1):100544. doi: 10.1016/j.xinn.2023.100544
4. Ye X, Cui X, Zhang L, et al. Combination of automated sample
preparation and micro-flow LC–MS for high-throughput plasma
proteomics. Clin Proteom. 2023;20(1):3. doi: 10.1186/s12014-022-
09390-w
5. Kverneland AH, Harking F, Vej-Nielsen JM, et al. Fully automated
workflow for integrated sample digestion and Evotip loading
enabling high-throughput clinical proteomics. Mol Cell Proteom.
2024;23(7). doi: 10.1016/j.mcpro.2024.100790
6. Pedersen AL, Ernest M, Affolter M, Dayon L. Analyzing various
biological fluids with a single automated proteomic workflow
for biomarker discovery. In: Islam Williams T, ed. Tissue
Proteomics: Methods and Protocols. Springer US; 2025:157-178.
doi: 10.1007/978-1-0716-4298-6_11
7. Boschetti E, Righetti PG. Low-abundance protein enrichment
for medical applications: the involvement of combinatorial
peptide library technique. Int J Mol Sci. 2023;24(12):10329.
doi: 10.3390/ijms241210329
8. Palstrøm NB, Nielsen KB, Campbell AJ, et al. Affinity-enriched
plasma proteomics for biomarker discovery in abdominal
aortic aneurysms. Proteomes. 2024;12(4). doi: 10.3390/
proteomes12040037
9. Kirsher DY, Chand S, Phong A, Nguyen B, Szoke BG, Ahadi S. The
current landscape of plasma proteomics: technical advances,
biological insights, and biomarker discovery. bioRxiv. 2025.
doi: 10.1101/2025.02.14.638375
10. Gonçalves E, Poulos RC, Cai Z, et al. Pan-cancer proteomic map
of 949 human cell lines. Cancer Cell. 2022;40(8):835-849.e8.
doi: 10.1016/j.ccell.2022.06.010
11. Shulman ED, Campagnolo EM, Lodha R, et al. Path2Space:
An AI approach for cancer biomarker discovery via
histopathology-inferred spatial transcriptomics. bioRxiv. 2024.
doi: 10.1101/2024.10.16.618609
12. Winchester LM, Harshfield EL, Shi L, et al. Artificial
intelligence for biomarker discovery in Alzheimer’s disease
and dementia. Alzheimers Dement. 2023;19(12):5860-5871.
doi: 10.1002/alz.13390
13. Khan S, Mohsen F, Shah Z. Genetic biomarkers and machine
learning techniques for predicting diabetes: systematic
review. Artif Intell Rev. 2024;58(2):41. doi: 10.1007/s10462-024-
11020-w
14. DeGroat W, Abdelhalim H, Patel K, Mendhe D, Zeeshan S,
Ahmed Z. Discovering biomarkers associated and predicting
cardiovascular disease with high accuracy using a novel nexus
of machine learning techniques for precision medicine. Sci Rep.
2024;14(1):1. doi: 10.1038/s41598-023-50600-8
15. Protti M, Milandri E, Di Lecce R, Mercolini L, Mandrioli R. New
trends in bioanalysis sampling and pretreatment: How modern
microsampling is revolutionising the field. Adv Sample Prep.
2025;13:100161. doi: 10.1016/j.sampre.2025.100161
16. Whelan SA, Hendricks N, Dwight ZL, et al. Assessment of a
60-biomarker health surveillance panel (HSP) on whole blood
from remote sampling devices by targeted LC/MRM-MS and
discovery DIA-MS analysis. Anal Chem. 2023;95(29):11007-11018.
doi: 10.1021/acs.analchem.3c01189
17. Shen X, Kellogg R, Panyard DJ, et al. Multi-omics
microsampling for the profiling of lifestyle-associated changes in
health. Nat Biomed Eng. 2024;8(1):11-29. doi: 10.1038/s41551-022-
00999-8
MEET THE INTERVIEWEES:
Irene De Biase, MD, PhD is a professor of pathology at the
University of Utah School of Medicine.
Jennifer Van Eyk, PhD is an international leader in the area of
clinical proteomics. She is professor of cardiology, director of
the Advanced Clinical Biosystems Institute and the Erika Glazer
Endowed Chair in Women’s Heart Health at Cedars-Sinai Medical
Center.
Qing Zhong, PhD is an associate professor at Children’s Medical
Research Institute (CMRI), The University of Sydney.
Michael Snyder, PhD, is a pioneer of precision medicine who has
invented many technologies enabling 21st-century healthcare,
including systems biology, RNA sequencing and protein chips.
CONTRIBUTORS
Aldrin V. Gomes, PhD
Aldrin is a professor and vice-chair of the Department of
Neurobiology, Physiology and Behavior at the University
of California, Davis. He has a PhD in biochemistry and has
authored over 200 publications, including several articles on
the problems and solutions for western blotting.
Alison Halliday, PhD
Alison holds a PhD in molecular genetics from the University
of Newcastle. As an award-winning freelance science
communications specialist, she has 20+ years of experience
across academia and industry.
Isabel Ely, PhD
Isabel is a Science Writer and Editor at Technology Networks.
She holds a BSc in exercise and sport science from the
University of Exeter, an MRes in medicine and health and a PhD
in medicine from the University of Nottingham. Her doctoral
research explored the role of dietary protein and exercise in
optimizing muscle health as we age.
Kaja Ritzau-Reid, PhD
Kaja Ritzau-Reid is a freelance science writer. She holds
a master's degree in neurotechnology and a PhD in neural tissue
engineering. She previously worked as a post-doctoral
research associate at Imperial College London, Department
of Materials.
Kate Harrison, PhD
Kate Harrison is a senior science writer and is responsible for
the creation of custom-written projects. She holds a PhD in
virology from the University of Edinburgh. Before working at
Technology Networks, she was involved in developing vaccines
for neglected tropical diseases and held a lectureship position
teaching immunology.
Kristin Burnum-Johnson, PhD
Kristin Burnum-Johnson is a science group leader for
functional and systems biology at Pacific Northwest National
Laboratory (PNNL). Her research is dedicated to achieving
transformative molecular-level insights into environmental
and biomedical systems by implementing advanced mass
spectrometry (MS) instrumentation.
Marija Velickovic
Marija Velickovic is a chemist at the Environmental Molecular
Sciences Laboratory (EMSL) on the Pacific Northwest
National Laboratory (PNNL) campus. Her focus is mass
spectrometry-based omics in biological, chemical and
environmental research.
Molly Coddington
Molly Coddington is a Senior Writer and Newsroom Team
Lead at Technology Networks. She holds a first-class honors
degree in neuroscience. In 2021 Molly was shortlisted for the
Women in Journalism Georgina Henry Award.
Sponsored by