Next-Generation Sequencing (NGS) has revolutionized both our understanding of genomics and its utility, marked by a surge in direct-to-consumer genomics and transformative therapeutic advances. However, as applications and method development expand, more gaps are revealed in the NGS landscape, spanning: 1) financial constraints, 2) time considerations, 3) informatics complexities, 4) data management, and 5) ethics/security vulnerabilities.
Sequencing unreliability and non-reproducibility, underscored by a lack of interpretation tools, compound the challenges of integrating sequencing data into clinical healthcare settings. Additionally, the intricacies of data governance and ethical obligations further interlace with these challenges. Ethical concerns, alongside adherence to GDPR standards, cast a spotlight on the delicate intersection between scientific advancement and the imperative of data protection. How can we do better?
We discuss these profound challenges and strategic solutions with a focus on exome sequencing data, from the perspective of a specialized NGS service provider. We cover how harnessing optimized digital workflows, from enhanced consistency in bioinformatics pipelines and streamlined data management to blockchain tokenization and GPT-based tools, culminates in tangible efficiency gains tailored to specific exome goals.
Quantifiable progress is shown by efficiency gains: cost reductions of 90%, turnaround times reduced by 33%, and improved workflow reproducibility. From this presentation, you will gain insights into the transformative influence of software-driven strategies in exome sequencing.
Hannah Dose is the CEO and Co-Founder of AUGenomics, a premier next-generation sequencing service provider. At the University of Hawai'i, Hannah employed CRISPR/Cas9 technology to combat plant diseases and their transmission. Through her work developing affordable COVID-19 testing at the La Jolla Institute, she recognized the need to improve access to molecular technology. At 27, as a young entrepreneur with a vision of democratizing scientific progress, she is committed to breaking down barriers to innovation and making molecular tools more accessible for all.
Hannah Dose:
All right, everyone, welcome. Thank you for being here. I'm here to talk to you a little bit about how you can overcome some of the biggest challenges in exome sequencing data. My name is Hannah Dose, and I'm the CEO and co-founder of AUGenomics, which is a next-generation sequencing service provider in San Diego, California. So, yes, it's a little bit cold here for us. And if you know me, you know that I'm passionate about leveraging genomics to accelerate scientific progress. And that's pretty much what led us to starting AUGenomics. I started AUGenomics with my co-founder and technical brainiac, Suzie Alarcon. Yes, I'll address the elephant in the room. I know it's not every day you see two women founding an NGS company, but defying conventions is pretty much our mantra. Being underrepresented members of the scientific founders' space is part of what drives our vision for a more equitable and inclusive future in biotech, and our commitment to inclusivity goes beyond our own journeys. It's ingrained in every aspect of what we do in genomics.
But we didn't set out to become CEOs or trailblazers in the industry. It all started with our shared curiosity about how expanding genomics access can help improve our health, our communities and our environment. My journey started at the University of Hawaii, where I worked on CRISPR/Cas9-mediated pathogen resistance in plants, namely in basil and cacao, and I crossed paths with Suzie at the next-generation sequencing course at the La Jolla Institute for Immunology, which is actually one of the top five immunology institutes in the world. It's also where Suzie still serves as the CORE Director of NGS.
We are winners of the XPRIZE rapid COVID testing competition, which was a $6 million competition aimed at increasing access to COVID testing in communities that didn't have these resources. This was early pandemic era. Alongside four other winning teams, we developed a super low-cost, really sensitive, non-invasive qPCR test that was saliva based. And fortunately, with our winnings, we were able to donate testing reagents and sequencing kits to the country of Nepal at a time when Nepal didn't have access to these resources. That was the first time we realized that we were both powerhouses who absolutely needed to continue this effort toward democratizing new technologies and diagnostics, and we just had to keep moving forward toward increasing access. Our journey with the La Jolla Institute continued for another fruitful two or so years, where we were really fortunate to work with some remarkable collaborators and researchers there. We were able to perfect a diverse array of NGS applications, everything from RNA sequencing to complex single-cell and even spatial sequencing, and really delve into the intricacies of each method. And by working closely with these researchers and understanding their key pain points throughout the years, we were able to solve some of their biggest problems.
And so, through that, we actually also learned that there was a really big gap in the landscape: while academic labs were able to get this personalized help from other academic COREs, everybody else was kind of left wondering or yearning for that extra level of support. We wanted to bridge that gap and that divide, and that's why we started AUGenomics. Now, I don't want to be a Debbie downer, but finding problems is my passion. That might sound kind of weird at first, but finding problems is the first step to finding a solution, right? And let me tell you, believe it or not, there's still a lot of problems in the NGS world. So, we're going to talk about that a little bit: some of the changes in the market and how those give rise to new challenges, with a focus on exome sequencing in particular, also looking at ethics and data governance. And then we'll also talk about how we at AUGenomics have found some really great solutions to some of these problems.
Let's look at current trends. The market in general is dependent on two main things: cost and quality. We all know that sequencing is already expensive, and at its core the technology is changing. There are new players coming to the market. At AUGenomics, we actually leverage the Element AVITI system, and this system has enabled us to provide not only better quality for our customers but also lower cost for them, so that we can lower entry barriers and get more research out there. Part of that higher quality depends on quite a few things, namely an improved signal-to-noise ratio with their better flow cells, and then also the use of rolling circle amplification. Instead of the old sequencing by synthesis, where you're basically doing PCR and copying off of each subsequent copy you make, further propagating error bias, you circularize your library and then copy off of that original circular library each time. So it's not further propagating error bias: if an error occurs, it happens one time and then it's not going to be noticed within the consensus of the colony. That also works because they use these dye-labeled cores that are bound to multivalent ligands, and these ligands all have to agree in consensus that they're bound to the correct base pair; otherwise, if you just have single binding molecules, errors can happen and they'll be picked up by your sequencer. Another really cool thing about these cores is that they use a lot less dye than most sequencing, and dye is a really expensive sequencing reagent.
So that's part of why, even though we're getting higher quality, we can actually charge a lot less for this: the reagents are a lot cheaper. It's just been really great for us, and our clients have loved the platform. Demand for single-cell sequencing is also increasing, as we all know. But it's still really expensive, and that can really affect the reliability of your research in the end. If you're paying $2,000 to $3,000 per sample, you're probably not going to be running triplicates. And if you're having issues with cell number, you don't even have the resolution to know if you're picking up all the positive signals that are there or if your negatives are actually really negative. So the cost here could actually be a problem in terms of research. Long-read sequencing is also picking up steam, but it is still a lot more expensive than short-read sequencing in general, so it is having a little bit of a slower pickup globally. But the ability to read our genetic code at an unprecedented precision has led us to a new era in medicine, one where treatments can be tailored to the individual. We're also seeing a rise in early disease detection, such as the early cancer detection from GRAIL, and with the increase of noninvasive sampling, such as saliva-based collection, we're seeing a rise in direct-to-consumer genomics. And as a subset of that, we're also seeing a lot of exome sequencing in the direct-to-consumer space. So why are we seeing a lot more exome sequencing?
Well, because it's cheaper. That's pretty much it. But what does that mean on a larger scale? It means greater accessibility for a larger number of patients.
Yeah, you might have heard of the $300 or $200 genome. In reality, this does not happen. It's not taking into account the instrument costs, additional reagents needed for extraction and library preparation, time, labor, and even extra sequencing to make up for error loss. With exome sequencing, on the other hand, you're only sequencing about 2% of the genome, and that 2% is the part of the genome that actually codes for protein. That 2% can still resolve up to 85% of disease-causing variants. So, while you're not sequencing all 3 billion base pairs, it covers most of the variation that we care about from a disease perspective. And so, what can we understand through exome sequencing, and why might it be important for you to have control over your own genome? Well, I have a story for you, and this has been really instrumental for me personally and also for AUGenomics as a whole. Our friend Daniel Uribe is the founder and CEO of GenoBank.io, which is the world's first decentralized DNA governance protocol. Welcome. When Daniel approached AUGenomics trying to find a partner for his cybersecurity platform, he got together a cohort of 16 patients. I didn't know this at the time, but Daniel actually has a child with a rare genetic disorder. He had been wrestling with insurance companies and at the same time wondering, how does this whole process work? Do I get to see this data? Do I get to explore it on my own? And in many cases, and in cases like his, the answer was no. So he just felt helpless, like he couldn't really play a role in his child's condition. And so, while waiting for the next lab test, he just wanted to add his son's sample to the cohort out of curiosity. Well, fast forward a couple of weeks and I got a call from Daniel on the phone, and his voice was a mixture of astonishment and excitement.
He said, "I didn't find a mutation in the gene that I was expecting." And the news kind of just hung there in the air; I didn't know what was going on. So he explained the story of his child, and he went on to say that he thought something had gone wrong in the lab, that maybe there was some kind of mistake. Now, at a sequencing service company, this isn't something that you want to hear. But he went on to explain that he had created a program inside his cybersecurity system that allowed him to explore the results of this exome. What it was, basically, was a combination of public databases and a version of ChatGPT. And this was something that he needed himself. I mean, he's not a biologist, he's not a bioinformatician, but as a cybersecurity expert, he does know a thing or two about data handling. So he created the system, and he found something really strange: a different mutation in a different gene, associated with an entirely different disease. What was happening was that the pieces of the puzzle were rearranging themselves into a more accurate understanding of his child's disorder. It was really moving to see how important it was for him to play a role in understanding his child's condition, for himself and for his family. And what had become evident was the importance of not only creating a platform where your data is secure and you own this data, but also giving you the opportunity to explore your own genome as well. In this situation it's your own exome, but just giving you the freedom to do that and better understand your own genetics. So, you might be thinking: Hannah, you brought up security a few times. Why should I care whether somebody has access to my exome? Or, who knows, you might be one of the millions of people that haven't done a 23andMe test or a direct-to-consumer test because of this concern. And so, you might be surprised to hear this: specifically with 23andMe, there was actually a news story while I was getting materials ready for this talk about a user on a forum called Breach Forums. I don't know if anybody's heard of this, but it's basically a site where hackers can sell data that they steal from other organizations, companies, governments, whatever. There was a user on there selling 1 million users' data from the 23andMe platform, for about $1 to $10 per user. So this data is valuable; he could have made up to $10 million off of this hack. But the issue is that these vulnerabilities are treated as the responsibility of the sequencing companies. So how can we better protect ourselves? Well, we need to have the power to control our own digital assets, and in this case, that would be our own digital DNA.
So, we have found a solution for this through the platform we're building with GenoBank.io, which is part of the end-to-end exome platform that we will be releasing. It's not released yet, but in it we're using tokenization of this digitized exome data onto a decentralized ledger, which is essentially a distributed database that is managed by a network of computers, or nodes, rather than by a single entity or authority. This is important because the decentralization makes the security system more robust; it's virtually impossible to hack into. Another important part of this is the tokenization process: each user is completely anonymized, and they also have full control over who's accessing their data. This is really important for clinical trials and things like that, where if you want to give your access to somebody, you can set limits on who can access it and when they can access it. You also get a full record of who has accessed it in the past, so that you can make sure your data is being used ethically. Now, this concept might be stirring up a lot of thoughts. What other kinds of data breaches have there been, or what other ethical gray areas are there? Well, there are a few. There have been a number of DTC companies in hot water recently. For example, in August of last year, one company's users had class certification granted against it because the company had sold their genetic information to a third party, completely unauthorized. And one of the concerns in my mind was that the company responded with, "Oh, well, none of our clients suffered any damages." I think that's a little bit concerning. But in general, I have a list of these, and I don't really have time to go through them all.
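To make the consent-token model described above a little more concrete, here is a minimal, purely illustrative Python sketch of the logical flow: an anonymized token stands in for a user's exome data, access is granted or revoked per party, and every access attempt is logged. This is a toy written under stated assumptions, not GenoBank.io's actual protocol or any real blockchain code; a production system would anchor these records on a decentralized ledger rather than in memory.

```python
import uuid
from datetime import datetime, timezone

class ConsentLedger:
    """Toy, in-memory illustration of a tokenized consent model.
    A real system would anchor these records on a decentralized ledger."""

    def __init__(self):
        self.grants = {}      # token -> {grantee: expiry datetime or None}
        self.access_log = {}  # token -> list of (timestamp, grantee, purpose, allowed)

    def mint_token(self) -> str:
        """Issue an anonymized token that stands in for one user's exome data."""
        token = uuid.uuid4().hex
        self.grants[token] = {}
        self.access_log[token] = []
        return token

    def grant(self, token, grantee, expires=None):
        """Data owner grants a named party access, optionally time-limited."""
        self.grants[token][grantee] = expires

    def revoke(self, token, grantee):
        """Owner revokes consent; the grantee immediately loses access."""
        self.grants[token].pop(grantee, None)

    def access(self, token, grantee, purpose):
        """A grantee requests access; every attempt is logged for audit."""
        now = datetime.now(timezone.utc)
        expiry = self.grants[token].get(grantee, "missing")
        allowed = expiry != "missing" and (expiry is None or now < expiry)
        self.access_log[token].append((now.isoformat(), grantee, purpose, allowed))
        return allowed

# Example: a participant tokenizes their exome, grants a (hypothetical) trial
# sponsor access, then revokes it; the audit trail shows both attempts.
ledger = ConsentLedger()
token = ledger.mint_token()
ledger.grant(token, "trial_sponsor_A")
ledger.access(token, "trial_sponsor_A", "variant re-analysis")   # allowed
ledger.revoke(token, "trial_sponsor_A")
ledger.access(token, "trial_sponsor_A", "variant re-analysis")   # denied, still logged
print(ledger.access_log[token])
```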
But these cases do underscore the importance of these companies taking responsibility for really protecting this sensitive and personal data. And then there's AI. Nowadays there's an AI for everything, but how are these algorithms being trained? Unfortunately, a lot of the time they're being trained on data they don't actually have explicit permission to use, which is very unfortunate. We're trying to go back now and undo a lot of the wrongs that have been done, but in some cases it's just not feasible and it's not going to happen. In forensics and law enforcement, obviously, DNA has been an incredible tool, but it can be very controversial as well. For instance, last year in New Jersey, the government subpoenaed, multiple times actually, the DNA from a newborn screening test. This is a test where blood is taken from a baby, and the state requires it to be done. But unfortunately, law enforcement was able to access this and use it to convict a relative of the child in a 1996 case, so they're going way back to look at that. I don't know. I think that
the root of this issue is essentially trust. We need to make sure that we can trust our government and our agencies to handle this ethically and to decide what kinds of crimes should permit them to do database searches. It's just interesting to think about when you talk about biodata security. One more thing, just a really quick mention: there's also concern over international law, and especially adversaries of the US collecting biodata of US citizens. We are doing what we can to protect ourselves from that, like with the GENE Act. And there's no global regulation in general, so countries have to do what they can to protect their people inside their borders or even globally. But for research transactions across international lines, how can we make sure that we are keeping up to date with regulations and that we're doing things legally? Let's take GDPR, for example, which protects those living in EU countries. The reality is it's the most progressive framework so far for giving people control over their own data. But it does have unintended and often problematic consequences, not only for B2C companies but also for publicly funded biomedical research.
Unfortunately, GDPR is super vague and difficult to interpret, and this often leads to international collaborators disagreeing on whether they're even dealing with personal data, on consent rules, or on when they can and can't move data across EU lines. And then there are provisions within GDPR, such as the right to erasure, which means that if somebody wants their data erased, it has to be taken completely out of the system. These regulations can get complicated if you think about things like clinical studies.
So, it's important to have safeguards in place, like anonymization and data minimization. And how can we make sure that we're being compliant? Going back to our solution: tokenization of data on a decentralized ledger with non-fungible tokens in clinical research provides that transparency, so that your users or your study participants can see exactly who's accessing their data. They can give consent and revoke consent, and any time they revoke consent, it's actually taken out of the system automatically, which reduces the overhead you would otherwise have as a clinical partner trying to go back and make sure you're up to date on everybody's consent status. Okay. So that was a lot on compliance. But let's talk about the data in general. What is this data, even, and how is it processed? What kind of challenges can you see along the way? How do you know if your exome data is actually reliable? I really like this quote: "Distinguishing between a good call set and a bad call set is a complex problem. If you knew the absolute truth, you probably wouldn't be running variant discovery." And I think this is really interesting. How can you check your work if you don't know what the truth is? What can you compare it to? Unfortunately, poor data does exist, and it can be detrimental to a study if you're not getting accurate and consistent variant calls. So, let's talk about gauging reliability. First, concordance analysis. That's basically comparing your data to a truth set: you can use multiple disease-specific control samples to form a truth set of known DNA variants, with each control sample genotyped for a specific pathogenic variant.
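As a rough illustration of what a concordance check against a truth set can look like, here is a small Python sketch. It assumes the call set and the truth set have already been parsed into simple (chrom, pos, ref, alt) tuples; real benchmarking tools also handle genotype matching, variant representation differences, and confident regions, so treat this only as the core idea. The example variants are made up for illustration.

```python
def concordance(calls, truth):
    """Compare a variant call set to a truth set of known variants.

    Both inputs are collections of (chrom, pos, ref, alt) tuples, e.g.
    parsed from VCFs. Returns simple sensitivity/precision-style metrics.
    """
    calls, truth = set(calls), set(truth)
    true_pos = calls & truth    # variants we called that are in the truth set
    false_pos = calls - truth   # called, but not in the truth set
    false_neg = truth - calls   # known variants we missed
    return {
        "sensitivity": len(true_pos) / len(truth) if truth else 0.0,
        "precision": len(true_pos) / len(calls) if calls else 0.0,
        "false_negatives": len(false_neg),
        "false_positives": len(false_pos),
    }

# Example with made-up control-sample variants:
truth_set = {("chr7", 117559590, "A", "G"), ("chr11", 5227002, "T", "A")}
call_set = {("chr7", 117559590, "A", "G"), ("chr1", 12345, "C", "T")}
print(concordance(call_set, truth_set))
# {'sensitivity': 0.5, 'precision': 0.5, 'false_negatives': 1, 'false_positives': 1}
```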
But it also depends on why you're doing sequencing, and whether or not you actually even have a suitable truth set, or even if you know what you're looking for. You can also look at variant count and composition. By composition, I mean you're looking at SNPs, you're looking at indels, you're looking at the TiTv ratio, and this can give you a good idea so you can gauge: does this data look right? Does it make sense? It can also tell you a good amount about what kind of diversity of variants you have. So, what does this mean in general? For SNP counts, you should see around 25,000 SNPs in a single individual's exome. However, this isn't always the case. There's a 2015 Nature paper that explains that different ethnic backgrounds actually show a lot of variability in SNP counts. One of the interesting things they showed was that most of the variants they found were actually rare. And certain backgrounds, for example people of African descent, have the highest number of variants. So it really can vary; it's not like you're always going to get 25,000 every time. The TiTv ratio, on the other hand, is also a good measure. If the distribution of transitions and transversions were random, without any biological influence, you would get 0.5, because with two purines and two pyrimidines, the likelihood of changing to the other base class is twice as high as changing within the same class. But in a biological context, it's common to see a methylated cytosine undergo deamination to become thymine, and so what you really see in whole-genome data is a TiTv ratio of around two. Furthermore, in exome data you have a higher number of CpG islands per base pair, and in CpG islands you get a higher concentration of methylated cytosines, so with exome data you would actually get a TiTv ratio of somewhere around three. If we're talking about structural variants, on the other hand, CNVs are usually not good to look at if you're trying to gauge the reliability of your exome data, because they can be really variable depending on read depth distribution. Here you can see a map of all the chromosomes from a 2008 study where they
catalogued the set of CNVs across the human genome. You can see that these are everywhere, or you can kind of see it, but they are everywhere. Of course there are conserved regions, but these CNVs really contribute to the diversity of populations. They also introduce risk for disorders and disease, so they are still really important. But unfortunately, in exome data you're still getting false discovery rates of about 60%, which is really unfortunate. There have been some machine learning algorithms that work to combat this, like DECoNT, which basically uses whole-genome data as a truth set, looks at the CNV signals from that, and compares them to your exome data so you can better calibrate whether what you're seeing in your read depth distribution is actually from biology or just from the reads.
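To give a feel for why exome CNV calls are so sensitive to read depth, here is a simplified Python sketch of the depth-normalization idea that exome CNV callers build on. It is not DECoNT itself, and the sample and panel values are invented; it just normalizes per-exon depth against a small reference panel, so that ratios near 1 suggest normal copy number, while capture or batch noise produces spurious deviations that look like CNVs.

```python
import statistics

def depth_ratios(sample_depth, panel_depths):
    """Normalize per-exon read depth against a panel of reference samples.

    sample_depth: {exon_id: mean depth} for the sample of interest
    panel_depths: list of such dicts for reference samples
    Returns {exon_id: ratio}; ~1.0 suggests two copies, ~0.5 a deletion,
    ~1.5 a duplication -- but capture/batch noise can mimic all of these.
    """
    # Scale each sample by its median depth to remove library-size effects.
    sample_med = statistics.median(sample_depth.values())
    ratios = {}
    for exon, depth in sample_depth.items():
        panel_vals = [d[exon] / statistics.median(d.values())
                      for d in panel_depths if exon in d]
        panel_med = statistics.median(panel_vals) if panel_vals else None
        if panel_med:
            ratios[exon] = (depth / sample_med) / panel_med
    return ratios

# Toy example: exon3 looks deleted (ratio ~0.5) relative to the panel.
sample = {"exon1": 100, "exon2": 110, "exon3": 48}
panel = [{"exon1": 95, "exon2": 105, "exon3": 100},
         {"exon1": 102, "exon2": 98, "exon3": 104}]
print(depth_ratios(sample, panel))
```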
And usually, the cause of that read-depth noise in exome sequencing is things like sample batch effects, GC-rich regions, and also the actual probes themselves that you use to target your exon regions. Okay. So, in general, bioinformatics can be really resource intensive. Imagine you have a team that's all working project by project, right? The potential for inconsistency due to variations in the workflow is a really big concern. Different team members might have different tools, or different versions of tools, which can lead to discrepancies. Even small differences make it really hard to maintain reproducibility in the results. There are also scalability challenges: bottlenecks, longer turnaround times, and higher computational cost. Because of that, resources allocated to maintenance could be redirected; you could be developing new projects, you could be pursuing other growth opportunities. You really don't want your resources tied up in technical limitations. Tools might not be on strict security protocols as well, so you get a higher risk of data breaches, even loss of data entirely. And then on top of that, you can expose your organization to legal and regulatory liabilities as well. Obviously, integrating changes can be expensive, time consuming, and technically complex, and updates are constant, which further drains that effort. Different tools can often operate independently, and in that case you might have some tools that are being overburdened while other tools are being underutilized. Even in troubleshooting, if you've done any bioinformatics or even just dabbled in it, you know there are a lot of tools that don't have very good documentation, and it is a pain. I have been a victim of this, and it is my least favorite thing about data analysis. The lack of support out there further compounds all those other issues and just really sucks up a lot of your time.
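Before getting to the solution we landed on, here is a generic Python sketch of the discipline that the "different tools, different versions" problem forces on a home-grown pipeline: pin tool versions and parameters in one config and record them alongside every run. The tool names, versions, and file paths are illustrative assumptions, not a description of any particular product's implementation.

```python
import hashlib
import json
import subprocess
from datetime import datetime, timezone

# One place where versions and parameters live, so every run is comparable.
# Tools and versions below are placeholders chosen for illustration.
PIPELINE_CONFIG = {
    "aligner": {"tool": "bwa", "version": "0.7.17", "params": ["mem", "-t", "8"]},
    "caller": {"tool": "gatk", "version": "4.4.0.0", "params": ["HaplotypeCaller"]},
}

def run_step(name, extra_args):
    """Run one pipeline step and return a provenance record for it."""
    step = PIPELINE_CONFIG[name]
    cmd = [step["tool"], *step["params"], *extra_args]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return {
        "step": name,
        "command": cmd,
        "expected_version": step["version"],
        "return_code": result.returncode,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def save_provenance(records, path="run_provenance.json"):
    """Write the run records plus a hash of the config actually used."""
    payload = {
        "config_hash": hashlib.sha256(
            json.dumps(PIPELINE_CONFIG, sort_keys=True).encode()).hexdigest(),
        "steps": records,
    }
    with open(path, "w") as fh:
        json.dump(payload, fh, indent=2)
```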
And so, at AUGenomics, we have found a solution for our own service-based pipeline with the use of the gnome software by Almaden Genomics. This is data analysis software, and as a service provider it has so many benefits for us; it really just checks all of the boxes. We're saving resources by lowering our time spent building pipelines, and we're saving on system maintenance, storage, and computational costs as well. Creating workflows is super easy: you can make a workflow in 10 minutes, pretty much just by dragging and dropping your tools and connecting one tool to another, and it's just amazing. I have a slide on here; I'll show you how it works. One of the things that sets gnome apart from other analysis software is how modular it is. You can actually use any public tool that's out there and integrate it into the software, or you can have their team integrate it for you, and then all you have to do is drop it into your workspace and it's ready to go. Part of that is their support team, which is awesome. So, no more worrying about package dependencies and compilations, which is the worst, my least favorite. And because it is all cloud based, we save on time, so I can be running multiple analyses while I'm here at this talk. So, I'll give you an example of some exome data that we processed, which, yes, really can take just a few minutes to put together. Let's look at a subset of four samples that we have from our exome data. These were collected with Origin X collection kits.
They were prepared using the Agilent SureSelect XT Human All Exon V8 probes, and this was sequenced on the Element AVITI at 50x coverage. Now, since we're a sequencing service company, I'm just going to focus on the metrics from this; that's the most important part for us. I'll also talk a little bit about what the difference is with gnome and how we're able to improve our efficiency and really fine-tune our workflow to help increase the reliability of our results. So, let's look at SNVs. We see about 24,000 to 36,000 SNVs with the gnome pipeline versus 29,000 to 40,000 SNVs with our piecemeal pipeline. For concordance analysis against data from a similar ethnic background, released last month by the Regeneron Genetics Center and published in Nature, we're seeing higher concordance with what's expected. Now, again, why is there a difference between these
two sets? Well, because gnome was quick and easy to use, we could fine-tune our workflow and set better filtering, based on different testing parameters. We actually had the flexibility to make these changes and quickly test and reiterate. And as you can see, removing a bit more of those lower-quality reads can help reduce inaccurate hits. We can talk about the transition-transversion ratio as well. As I mentioned before, we should be expecting a TiTv ratio of around three, and we're seeing our ratio pretty much right in line with this expectation, which suggests that our positive variants are more likely to actually be positive and our negatives are more likely to actually be negative. Our previous pipeline had a lower transition-to-transversion ratio on average, which indicates that our improved filtering with gnome probably got rid of some false positives that we would have otherwise missed. Our coverage, according to our BED file, also shows complete coverage and coverage uniformity across coding regions, and this is ideal in exome sequencing to ensure accurate variant calling and sensitivity. Sufficient and uniform coverage is important for robust and sensitive SNV detection, and it can also help you improve CNV results. I know I talked about how CNVs are usually not used in exomes, but if you're going to use one of those ML programs for this, you're going to want to start with even coverage across the board. Okay. So, when I say this process is easy, I actually mean it. Between workflow building, integrations, and maintenance, we saved over 30 hands-on hours in making this compared to our piecemeal approach, which is absolutely amazing for us. And it means we're getting more time to do other things like working on method development.
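For anyone who wants to sanity-check these kinds of metrics on their own call sets, here is a small Python sketch of two of the checks discussed above: the TiTv ratio over SNVs and a simple coverage-uniformity summary over BED-defined targets. It assumes SNVs have already been extracted as ref/alt base pairs and per-target mean depths have already been computed; it is not the production pipeline described in the talk.

```python
PURINES = {"A", "G"}

def titv_ratio(snvs):
    """snvs: list of (ref, alt) single-base substitutions.

    A transition stays within purines (A<->G) or within pyrimidines (C<->T);
    anything crossing the two classes is a transversion. Randomly distributed
    changes give ~0.5; real exome data sits near ~3.
    """
    transitions = sum((ref in PURINES) == (alt in PURINES) for ref, alt in snvs)
    transversions = len(snvs) - transitions
    return transitions / transversions if transversions else float("inf")

def coverage_uniformity(target_depths, min_depth=50):
    """target_depths: {target_id: mean depth} over BED-defined exome targets.

    Reports the fraction of targets at or above min_depth and the relative
    spread of depth across targets (lower spread = more uniform capture).
    """
    depths = list(target_depths.values())
    mean = sum(depths) / len(depths)
    frac_covered = sum(d >= min_depth for d in depths) / len(depths)
    spread = (sum((d - mean) ** 2 for d in depths) / len(depths)) ** 0.5 / mean
    return {"mean_depth": mean,
            "fraction_targets_at_min_depth": frac_covered,
            "coefficient_of_variation": spread}

# Example: mostly transitions (as expected for an exome) and fairly even coverage.
print(titv_ratio([("A", "G"), ("C", "T"), ("G", "A"), ("A", "T")]))   # 3.0
print(coverage_uniformity({"t1": 60, "t2": 55, "t3": 48}, min_depth=50))
```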
So instead of a two-week pipeline build, 2 to 8 hours a day or week on workflow updates, and additional hours spent on documentation with our bioinformatician, this whole workflow took about 10 hours hands-on for setup. Now, need I remind you, this was the first workflow we ever did, so I feel like next time it's going to be even half the time. That's 8 hours for updates and then 2 hours for documentation. Using gnome has really given us our resources and our team back. Not only that, but we actually save an average of $1,500 a month using gnome, and who doesn't like cost savings? Now I just want to show you a quick video of what this actually looks like. This is what it's like when you're using Almaden, and you can see it's really easy. That was just BWA index that I pulled in as a tool; it's basically just drag and drop. You connect it to your input, make your output, and you can set your parameters right there. That's how easy it is. And then you can index your reference; I think that took about 20 seconds for that workflow. So you can imagine how quick it is to build full workflows: you just drag in each of the programs that you need, link them up, and you're good to go. It can make a huge improvement to your workflow because you're maximizing your reproducibility as well. You know that every time you run this, it's going to be run exactly the same way, no matter who on your team is running the workflow. And it also saves you resources; you're not spending so much time on this, and it just makes your pipelines and integrations easier. Okay, so I've mentioned some of the best solutions we've found in sequencing. You can integrate these into your own in-house sequencing capabilities, or you can send your samples to a trusted provider like us. At AUGenomics, we've witnessed firsthand the impact that all of these challenges can have on your scientific progress, and that's why we've designed our services the way we have. We're all about taking the hassle out of your sequencing projects at a cost-effective price. We de-risk your projects by providing tailored solutions to help alleviate some of these problems and help you focus on what matters most to you, which is your results and your research. We are a trusted partner with academia, industry, and also government agencies. We work closely with each of our customers, whether they're developing new methods or they just want quick-turnaround sequencing for libraries they've already prepped. By outsourcing your sequencing projects to us, you get access to expertise, cutting-edge equipment, and optimized workflows without bearing the full cost and time associated with running your own in-house capabilities. And we get it, some sample types are a pain to work with; anyone that's done, say, epithelial extractions or that kind of thing knows.
We know what it's like, and those difficult samples are actually kind of our bread and butter, our specialty. We know that not every provider out there is the same. We've seen the aftermath of clients going to a different option, and they've lost all their samples, or they've spent a lot of money and gotten really bad results that we end up redoing for them. That's why they choose us. With AUGenomics, you can rest assured that your samples are in good hands. So, what do we do exactly? We do experimental design and optimization through our consulting services. Fun fact: this started as a consulting company. We're experts in method development and validations, so if you want to try something that's never been done before, we're your
partner, we're there for you. We perform extractions, library preparation, and of course the main event, which is our sequencing. We're really known for our quick turnaround times: anything under one week for sequencing, sometimes two or three days. Our whole vision is to transform life sciences by making it easier for you to get access to high-quality sequencing technologies while saving you time and resources. Okay. I just want to take the time really quick to announce a grant that we are sponsoring with Element Biosciences. This grant is providing a free sequencing project for up to 32 samples through the AVITI Accelerator Grant, as part of our efforts to help lower entry barriers for transformative research. And 32 samples can really add up to a lot. So, if you're a researcher, a lab, a student, whatever, I really highly recommend you apply. The announcement for this was only two days ago, so you have plenty of time. Applications are due on December 15th, and we'll be looking at several factors: research plan, scientific interest, feasibility, and also the ability to complete the project within a one-year time frame. The selected winner will get free sequencing services for this project, everything from extraction and library preparation to the sequencing itself. If you'd like to learn more or submit an application, go ahead and scan the QR code and it'll take you to the grant page. All right, that's all I have for you today. Thank you for joining me.