Happy New Year everyone! Today we released computer vision and geomodel update v2.28, which includes 114,277 taxa (up from 112,613 in the last model) and is trained on data exported on December 28, 2025.
This model is the result of millions of observations and identifications shared on iNaturalist. As new taxa are observed and identified, taxa (mostly species, but also genera, families, etc.) are added to the model once there are around 100 photos and 60 observations of that taxon.
As taxonomy changes and misidentifications get corrected by the community, sometimes taxa are also removed from the model. To keep up with these changes, we update the model every month or two so that the community can benefit from the improvements. Check out our help page for the most up-to-date information on how we update the model.
Below are links to the taxa added since the last model. You can click through and search for your username to see if you have observed or identified any of these species. If so, thank you!
The graph below shows how the number of taxa included in the model has grown over the last few years — more than doubling from 55,000 in 2022 to over 110,000 today.

Each time we release a new model, we evaluate it against the previous one.
The graph below shows model accuracy estimates using 1,000 random Research Grade observations per group that were not seen during training. The paired bars below compare the average accuracy of model 2.27 with the new model 2.28.


We're launching our sixth Observation Accuracy Experiment today! This experiment follows the recent ID-a-thon (December 15, 2025 - January 15, 2026), which brought thousands of new identifiers into the community and reduced "Unknown" observations by about 34%. This post-ID-a-thon experiment will help us understand whether that surge in identification activity maintained data quality — and ideally, improved coverage and reduced uncertainty.
For those new to these experiments, here's an overview of how we measure observation accuracy (for a more detailed look at the methods, read through the methods section for Experiment v0.6):
From the iNaturalist database (A), we generate a random sample of 10,000 observations (B), and assign subsamples to qualified validators (C). Validators are selected based on their expertise: if an identifier has made at least 3 improving identifications on a taxon in a country, we consider them qualified to validate that taxon within that country.
Improving identifications are the first suggestion of a taxon that the community subsequently agrees with.
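As a rough illustration, the qualification rule described above can be sketched in code. The data shapes and function name here are hypothetical, not iNaturalist's actual implementation:

```python
from collections import Counter

def qualified_validators(improving_ids, taxon, country, threshold=3):
    """Return users with at least `threshold` improving IDs
    for the given taxon in the given country.

    `improving_ids` is an iterable of (user, taxon, country) tuples,
    one per improving identification (a hypothetical input format).
    """
    counts = Counter(
        user for (user, t, c) in improving_ids
        if t == taxon and c == country
    )
    return {user for user, n in counts.items() if n >= threshold}

# Example: only "ana" has made 3 improving IDs of this taxon in Denmark,
# so only she would be asked to validate it there.
ids = [
    ("ana", "Pacifastacus leniusculus", "Denmark"),
    ("ana", "Pacifastacus leniusculus", "Denmark"),
    ("ana", "Pacifastacus leniusculus", "Denmark"),
    ("ben", "Pacifastacus leniusculus", "Denmark"),
]
print(qualified_validators(ids, "Pacifastacus leniusculus", "Denmark"))
# → {'ana'}
```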

We email validators with a link to their subsample and ask them to identify each observation as best they can by the deadline. Instructions are simple: add the finest identification you can based on the evidence, even if that's just a broad ID like "Plants" or "Insects."
We score observations as Correct, Uncertain, or Incorrect by comparing validator identifications to each sample observation's taxon:

We calculate Accuracy as the percentage of correctly identified observations in the sample (including subsets like Research Grade observations), as well as the percentages that are Uncertain and Incorrect. We also measure Precision based on taxonomic specificity — finer IDs (like species-level) score higher than coarser IDs (like kingdom-level).
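The scoring logic above can be sketched roughly as follows. The function names, data shapes, and ancestor lookup are hypothetical simplifications, not iNaturalist's actual implementation:

```python
def score_observation(obs_taxon, validator_taxa, ancestors):
    """Score one sample observation against validator IDs.

    `ancestors` maps a taxon to the set of its ancestor taxa
    (a hypothetical lookup; real taxonomy comes from the database).

    - Correct: a validator ID is the observation's taxon or a
      descendant of it (the validators confirm the label).
    - Incorrect: a validator ID conflicts with the observation's
      taxon (neither is an ancestor of the other).
    - Uncertain: no validator reviewed it, or validators could
      only supply coarser, ancestor-level IDs.
    """
    if not validator_taxa:
        return "Uncertain"
    for v in validator_taxa:
        if v == obs_taxon or obs_taxon in ancestors.get(v, set()):
            return "Correct"
    for v in validator_taxa:
        if v not in ancestors.get(obs_taxon, set()):
            return "Incorrect"
    return "Uncertain"

def accuracy(scores):
    """Percentage of observations scored Correct."""
    return 100 * sum(s == "Correct" for s in scores) / len(scores)

# Example with a tiny hypothetical taxonomy:
ancestors = {"Apis mellifera": {"Apis", "Apidae"}, "Bombus": {"Apidae"}}
print(score_observation("Apis mellifera", ["Bombus"], ancestors))
# → Incorrect (a conflicting genus, not an ancestor of the species)
```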
Our last experiment (v0.5) ran in early December 2025, just before the ID-a-thon kicked off. Here's what we found for Research Grade observations:
The 1% error rate continues to align with earlier experiments — excellent news for data quality. However, the 8% uncertainty was higher than in previous rounds (typically around 3-4%). This likely reflects that we contacted about 1,000 fewer validators in v0.5, even though the response rate remained solid at 60%.
Important note: "Uncertain" doesn't mean "wrong." It often indicates observations we couldn't get reviewed by qualified validators within the time window, not that the existing ID was questionable. Many of the ~450 RG observations in this category are almost certainly correct; they just weren't matched to a validator under our criteria.
In contrast, only 76 RG observations (about 1%) were determined to be incorrect or too finely identified based on the evidence.
This post-ID-a-thon experiment will help us answer a key question: Did the ID-a-thon improve data quality?
We're hoping to see:
The ID-a-thon also tackled many observations stuck at broad taxonomic levels, which could influence precision scores in interesting ways.
We've been reading your feedback carefully from v0.5, and your comments are directly shaping what we're hoping to improve in future experiments:
The changes aren't solidified yet, so if you have additional ideas, please share them in the comments below. Your expertise helps us refine these experiments and make them more valuable for understanding iNaturalist's data quality.
Based on questions from previous experiments, here are some helpful clarifications:
Q: Where can I see the results of this experiment?
A: Results will be updated daily here. At the end of the experiment, we’ll also add an update to this blog post with the final results.
Q: Can I use field guides, books, or online resources when making IDs?
A: Absolutely! Use whatever resources you need to make the best-informed identification possible.
Q: What if the observation already has multiple confirming IDs at Research Grade?
A: That's fine — add your ID anyway. We know this creates some redundancy, but it helps us assess the accuracy of the identification process.
Q: What if I get an observation I’ve already added an ID to?
A: You can skip observations where you’ve previously added an ID if that ID is still relevant.
Q: What if I received one of my own observations to identify?
A: Skip your own observations if the ID you previously added is still relevant. If you didn’t initially identify your observation, or you can refine or correct your original identification, go ahead and add an ID!
Q: The observation has a private or obscured location and I can't identify it beyond a broad level. What should I do?
A: Add the finest ID you can based on the available evidence. If that's just "Insects" or "Plants," that's perfectly okay and still valuable data.
Q: Should I add a comment explaining I'm participating in the experiment?
A: It's not necessary (and creates extra notifications for other identifiers), but if you want to add one, keep it brief. Some observers appreciate knowing why their older observations are getting sudden attention.
Q: What if the observation can't be identified more specifically due to photo quality or life stage?
A: Add the finest ID you can confidently support with the evidence shown. There's no penalty for broad IDs when that's what the evidence supports. It's always okay to skip observations you're unsure about.
If we contact you as a potential validator and you choose to participate, thank you so much in advance. These experiments wouldn't be possible without your time, expertise, and dedication to improving iNaturalist's data. Every identification you add during the experiment helps us better understand the quality and reliability of one of the world's largest biodiversity datasets.
Thanks, everyone, for participating in Experiment v0.6! The experiment is still running through February 12, 2026, and so far ~87% of the sample has been validated.
It’s difficult for us to predict in advance which observations people will feel comfortable identifying. In earlier versions of this experiment, we selected validators based on whether they had added at least three improving identifications to a given taxon in the past year, regardless of location. Based on feedback that people are generally more confident validating taxa from places they’ve identified before, recent experiments have used a stricter criterion: at least three improving identifications for that taxon within the same country.
While this approach likely results in samples that are a better match for each validator’s expertise, the downside is that it significantly reduces the pool of eligible validators. In this experiment, 706 observations (about 7% of the sample) were not assigned to any validators due to these stricter criteria. This means that even distinctive, relatively easy-to-validate observations—such as this signal crayfish from Denmark—will go unvalidated simply because no one on iNaturalist has added three improving identifications of that species in Denmark in the past year.
In future experiments, we may adjust the approach to first try to find validators “in country,” but then top off with validators who have relevant taxon expertise outside the country when needed. For this current experiment, if you’d like to receive an additional sample consisting of taxa you’ve added three improving identifications to outside of the country, please add your login to the form below and we’ll generate a second batch for you before the experiment ends. As always, please add the finest identification you can—recognizing that this second sample may include observations from places you’re less familiar with.
If you’re interested in receiving a second sample, please list your login here
And thanks again for your participation in helping us better understand data quality on iNaturalist!
Thanks again to everyone who participated in this observation accuracy experiment: the results are now out! We intentionally used the same methods as our December experiment so we could isolate the impact of the recent ID-a-thon. Because the ID-a-thon focused primarily on coarsely identified observations (e.g., moving observations from unknown to order, family, etc.), we expected any effects to show up mostly in the verifiable subset of observations (Needs ID + Research Grade), rather than in Research Grade alone. That’s broadly what we saw. Among verifiable observations, IDs are now slightly more precise (precision increased from 78% to 79%) and slightly less incorrect (the inaccuracy rate decreased from 4% to 3%, while the fraction of observations we couldn’t confidently vet also declined). These changes are small and may not be statistically significant, but they’re moving in the right direction.
For Research Grade observations specifically, the results were essentially unchanged, with an incorrect rate of ~1%. We were not able to reduce the fraction of the sample we couldn’t vet below ~9%. This is largely driven by our validator selection criteria: for about 9% of the sample, we couldn’t identify any candidate validators who met our threshold (e.g., having made at least 3 improving IDs for that taxon in that country). One limitation we uncovered is that our criteria excluded people identifying their own observations. For some taxa—especially those that are almost always posted with correct species-level IDs (e.g., Polar Bear)—this means there may be no users who have added “improving” IDs, only supporting ones. In future rounds, we plan to loosen these constraints by allowing self-identifications and relaxing the in-country requirement when we don’t have enough candidate validators.
Looking ahead, we may also focus future experiments more narrowly on Research Grade observations. The core question we’re trying to answer is about the accuracy of Research Grade data and how it varies across taxa and geographies. Restricting the sample to Research Grade would give us larger sample sizes per group and more statistical power to explore those patterns. In parallel, we’re also starting to pool results across all of our experiments to better understand broad trends. For example, when we pool across experiments, taxa, and continents and group Research Grade observations by how many total observations a species has on iNaturalist, a clear pattern emerges: accuracy is strongly related to how commonly a taxon is observed. Species with >1,000 observations (e.g., White Tiger Butterfly) have average accuracies in the low 90s; those with 250–1,000 observations (e.g., Cloud-forest King) are in the high 80s; those with 100–250 observations (e.g., Malayan Crow) are around ~80%; and rare species with fewer than 100 observations (e.g., Antillean Clearwing) drop into the ~60–70% range. (These estimates are conservative lower bounds, since “uncertain” cases are treated as incorrect in this summary.)
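The pooled grouping described above can be sketched as follows (the data shapes are hypothetical, and, as in the summary, "Uncertain" is conservatively counted as incorrect):

```python
def pooled_accuracy_by_rarity(records, species_counts):
    """Conservative accuracy by observation-count bucket.

    `records` is a list of (species, score) pairs pooled across
    experiments; `species_counts` maps species -> total observations
    on the site (both hypothetical inputs). Scores other than
    "Correct" count against accuracy, so results are lower bounds.
    Bucket edges follow the post's grouping.
    """
    buckets = {"<100": [], "100-250": [], "250-1000": [], ">1000": []}
    for species, score in records:
        n = species_counts[species]
        if n < 100:
            key = "<100"
        elif n < 250:
            key = "100-250"
        elif n <= 1000:
            key = "250-1000"
        else:
            key = ">1000"
        buckets[key].append(score == "Correct")
    # Percentage correct per bucket; None where a bucket is empty.
    return {k: (100 * sum(v) / len(v) if v else None)
            for k, v in buckets.items()}
```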

This relationship likely helps explain why an overall Research Grade accuracy in the 90% range can “feel” high to many identifiers. iNaturalist now includes over 500,000 species, but nearly 20% of all observations come from just the top 500 species. Any random sample of observations is therefore heavily weighted toward common, relatively easy-to-identify taxa. By contrast, identifiers who spend most of their time in narrower or more difficult groups—say, a particular lineage of beetles with far fewer observations—may experience much lower apparent Research Grade accuracy in their day-to-day work. As we continue running and refining these experiments, we’ll be able to build a more nuanced picture of how accuracy varies across iNaturalist and use that to guide strategies for improving data quality over time.
Thanks again for all the time and care you’ve put into participating in these experiments—we truly couldn’t do this without the community.


The iNaturalist platform is built on two things: a passionate community and high-quality data. We've set our product goals for the first half of 2026, and we're excited to share our roadmap, organized under two of the core pillars that drive our work: People and Science.
Among our highest priorities is improving the user experience in our flagship iNaturalist app. We are overhauling key areas of the new app:
We'll also be making some improvements on the iNat website for both new users and our committed site curators and project owners:
Behind the scenes, we're also constantly working to keep the iNat platform up and running. Last year, we had better than 99.5% uptime, and we aim to match that in 2026. We're also working to substantially reduce traffic from search/scraper bots to our website and Network Sites. While some automated traffic is inevitable on today's internet, finding ways to cut down bots will help us reduce costs and let us focus on other site improvements.
Our work under the Science pillar is all about strengthening the core of iNaturalist as a tool for science. In the first half of 2026, that means supporting our dedicated community of identifiers.
In December we surveyed identifiers on iNaturalist, and we came away with three key findings that we're working to address:
Identifiers spend a significant amount of time correcting incorrect species-level IDs driven by overly confident computer vision (CV) suggestions.
We’re going to improve our computer vision-powered suggestion system to reduce how often people select incorrect species-level suggestions. We’ll do that by adjusting the model’s precision–accuracy tradeoffs and how suggestions are presented across the platform. We’re still working out the implementation details, but we expect this seemingly small change to have an outsized impact, substantially reducing erroneous species-level IDs from our computer vision system.
One of the most common frustrations among identifiers is the high volume of observations with insufficient or low-quality evidence.
We will be testing and iterating on methods to create photo tips that help observers take more identifiable photos. In a limited prototyping phase, we'll generate Photo Tips based on comment data. We're not sure yet how far we'll get, but we're working hard on finding ways to improve the quality of photos, to make identification easier and more accurate.
Many identifiers are concerned that there simply aren't enough experienced people to keep up with the workload.
We're going to invest in making it easier for expertise to be shared by helping more people discover the wealth of information we already have within existing ID remarks on the website.
So, we're working on a new search system to help people find and sort through useful identification information while ensuring that human expertise and input remain central. We'll be using insights from our explorations in 2025 to test relevance sorting of comments, but we won't be summarizing or altering identifier comments. This will initially be rolled out incrementally, starting with a core group of testers on the website.
Thank you for all your contributions to iNaturalist over the years. We are excited to keep improving the platform in service of the community and look forward to sharing our progress with you throughout the first half of 2026!

We recently crossed a milestone together on iNaturalist: 4 million observers worldwide have documented biodiversity on the platform. While millions of people use iNaturalist for a wide range of purposes each month, only 4 million have taken the step to actively share what they find.
Now, over four million people have posted a verifiable observation. Each of those four million observers noticed something, got curious, and decided to share it.
Together, this community has created nearly 300 million observations that help scientists monitor ecosystems, find new species, and understand how biodiversity is adapting to our changing planet. Your observations have contributed to almost 7,000 peer-reviewed papers and informed grassroots conservation projects around the world.
To see what the geographic distribution of observers looks like, we took a random sample of about 8,000 unique users who've been active in the last two years and tracked where they made their first observation (based on when they observed it, not when they uploaded it). Here's what we found, with a ±1% margin of error:
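For reference, the ±1% figure is consistent with the standard margin of error for a sample proportion at this sample size. Here's a simplified check, assuming independent random sampling:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) with ~8,000 sampled users, in percentage points:
print(round(100 * margin_of_error(0.5, 8000), 2))
# → 1.1
```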

It would also be interesting to look at a subsample of the million newest iNaturalist observers and see if that differs from this distribution. We’re looking forward to finding ways to continue welcoming people into the iNaturalist community from around the world!
You all are proof that being a naturalist starts with leaning into curiosity and connection, and is all about noticing the nature near you and wanting to understand it better. Thank you for being part of this effort, and for helping one another connect with nature and each other in meaningful ways!


This past December we asked prolific identifiers on iNaturalist (defined in this case as those who have at least 2,000 active identifications for other users) to take a survey about their experience identifying observations. Nearly 2,500 people filled out all or part of the survey (thank you!), and we wanted to share some of the results here. All responses were anonymous.
This is our first survey of this kind, and will be part of an ongoing effort to better understand the iNaturalist community and how we can continue to support you. Below are some takeaways followed by the survey results and what to look forward to in the future. We’re interested in constructive feedback about the survey itself as well as results.
There are some strong signals from the responses to this initial survey.
Respondents were asked to pick 1-3 of the following responses (which were ordered randomly for each person):

| Rank | Reason | Votes |
|---|---|---|
| 1 | To learn and become a better naturalist | 1522 |
| 2 | To ensure data accuracy | 1311 |
| 3 | To give back to the community | 1279 |
| 4 | I find it intrinsically motivating | 1264 |
| 5 | To help others learn | 957 |
| 6 | For my own research | 428 |
| 7 | Because no one else can identify these taxa | 257 |
| 8 | Other | 64 |
Respondents were asked to rate on a scale from 0 to 10, with 0 being “Not enjoyable” and 10 being “Very enjoyable”.

| Rank | Rating | Votes |
|---|---|---|
| 1 | Eight | 801 |
| 2 | Nine | 476 |
| 3 | Seven | 454 |
| 4 | Ten | 452 |
| 5 | Six | 165 |
| 6 | Five | 77 |
| 7 | Four | 33 |
| 8 | Three | 15 |
| 9 | Two | 4 |
| 10 | One | 0 |
| 11 | Zero | 0 |
This was a free text question, and we received over 1,100 responses. Based on reviewing the raw responses, several large themes emerged. Below are some of them, with a representative response in quotation marks.
In this question we asked respondents to rank the following ten reasons from most difficult to least difficult. Each reason was randomly sorted for each person. Below is a list ranked by mean placement. The lower the mean ranking, the more difficult the respondents rated it.
| Rank | Reason | Mean |
|---|---|---|
| 1 | Too many observations have poor quality evidence | 2.89 |
| 2 | There are too few identifiers on iNaturalist | 4.09 |
| 3 | Too many observations to identify | 4.67 |
| 4 | Identifications are not weighted by expertise | 5.17 |
| 5 | Too many Computer Vision identifications to correct | 5.46 |
| 6 | It takes too many identifications to overturn a wrong identification | 5.54 |
| 7 | Identify page is too slow | 6.35 |
| 8 | Not enough filters for the work I want to do | 6.39 |
| 9 | Some users opt out of Community ID | 6.54 |
| 10 | Other | 7.9 |
Numbers 2 and 3 were probably too similar here. While we think there are some differences in how they may be read, we think they both point to the same difficulty: identifiers feel overwhelmed by the number of observations they feel they need to identify.
Respondents could choose as many options as they wanted. The options were randomly sorted when shown to the respondent.

| Rank | Word/Term | Votes |
|---|---|---|
| 1 | Nature lover | 1850 |
| 2 | Naturalist | 1804 |
| 3 | Curious | 1514 |
| 4 | Outdoors enthusiast | 1506 |
| 5 | Photographer | 1249 |
| 6 | Researcher or scientist | 1093 |
| 7 | Bird watcher | 985 |
| 8 | Student | 623 |
| 9 | Gardener | 562 |
| 10 | Educator or teacher | 494 |
| 11 | Land manager | 177 |
| 12 | Other | 157 |
Respondents could choose one option.

| Rank | Year Joined | Number of Responses |
|---|---|---|
| 1 | Before 2020 | 1009 |
| 2 | 2020-2023 | 928 |
| 3 | 2024 | 147 |
| 4 | Don’t remember/unsure | 118 |
| 5 | 2025 | 74 |
Respondents could choose one option.

| Rank | Number of Identifications | Number of Responses |
|---|---|---|
| 1 | 2,000-4,999 | 570 |
| 2 | 20,000-49,999 | 404 |
| 3 | 5,000-9,999 | 392 |
| 4 | 10,000-19,999 | 390 |
| 5 | 100,000+ | 252 |
| 6 | 50,000-100,000 | 227 |
As might be expected, there was a wide range of responses here, but also some clear themes. Here are some, followed by representative quotes.
Look for our product roadmap blog post next week, which will include information about product improvements that address some of these issues. In addition to product changes, we’ll continue to improve messaging and engagement with both new and existing users, and create more help documentation in 2026.
Thank you so much to everyone who helps out others with identifications, and to those who took the time to fill out this survey. We plan to conduct more surveys of the wide range of iNaturalist community members in 2026 to get a better understanding of where things stand and where they can be improved.
If you have constructive feedback about this survey or surveys in general, please let us know! You can discuss here on the iNaturalist Forum, or submit a Support Ticket directly to us. We’ve closed comments on this blog post because it’s difficult to have a detailed conversation in blog post comments alone.


Our Observation of the Week is this Velvet Mite (in the genus Mesothrombium), seen in Australia by @jeremeyhegge!
“In my younger years, I would have preferred to be at home playing Nintendo and found the forest pretty boring,” recalls Jeremy Hegge, “but in my late teens and early twenties I started to really appreciate the natural world. I got a little addicted, to be honest, and tried to get out hiking once or twice or thrice a week.”
In 2019 Jeremy got into observing and studying mushrooms, which are currently his main area of interest.
We know so little about the likely tens of thousands of undescribed mushroom species across this huge continent. There is unfortunately little funding given to fungal research + a limited ability to study them at University in Australia which doesn’t help the situation.
The sub-tropical and tropical regions of Australia are of particular interest to me because there are constantly new genera and species being documented (the tropical regions really need more people actively taking photos of mushrooms for iNaturalist!). But my favourite habitat is Nothofagus rainforest – cool and mossy and ancient <3
However, it was on a trip to remnant “Big Scrub” subtropical lowland rainforest with fellow iNatter @cajqld that Jeremy photographed the furry mite you see here.
The conditions were hot and dry but you can often still find tiny mushrooms even when it is dry. The tiny ones are particularly poorly documented – so our eyes were looking intently for little details in the understory. @cajqld actually spotted the fuzzball. I thought it was really cute and tried to take some nice photos – although taking photos of moving objects isn’t my speciality 😂 It was by far the largest velvet mite that I have ever seen (most are very small if you didn’t know).

I reached out to acarologist @owen_seeman, who identified this observation to the genus level, for some information about velvet mites, and here’s what he wrote:
Mites - the smallest of all arthropods but the largest group of arachnids - don't get a lot of love, with few exceptions. The greatest exception of all is Velvet Mites, the plush toys of the mite world. These belong to a huge group of mites called Parasitengona, which also includes Water Mites and Chiggers. All these mites (again, with exceptions) do the same thing: larvae find a host on which they engorge, they then drop off and have a pupa-like resting stage, become a nymph, rest again, then become an adult. The nymphs and adults are predators, with terrestrial forms having a liking for arthropod eggs. Engorged larvae look like blobs, but nymphs and adults are beautiful, sometimes stunningly so, as in the case of Mesothrombium.
Mesothrombium itself is the cuddliest of all velvet mites, with a dense pelage of long hairs covering its body. They comprise 8 species that are found in Australia and New Caledonia. Curiously, despite numerous photographic records of nymphs and adults, we don't know anything about the larva. This is because nobody has matched larvae and adults, which is done by rearing the mites (the old way) or molecular matching (the new way). Like most Parasitengona, they're probably not very fussy about hosts, and at best larvae might target an order or a bigger family of insects.

Jeremy (above) joined iNat in 2021 but tells me he only started using it extensively in 2023.
Whenever I see something interesting, I’m always trying to take photos for iNat! iNaturalist encourages me to focus on all of the amazing life that surrounds us. There is so much beauty and mystery still left on the planet.
P.S. I really appreciate all the knowledgeable people who help me out with ID’s.
- you can follow Jeremy on Instagram and check out his schedule of mushroom walks!
- speaking of fungi and Australia, check out our profile of @sofiazed1, a top fungus identifier for the country!
- take a look at the most-faved velvet mites on iNat!


From December 15 to January 15, we invited the iNaturalist community to try something a little different: an ID-a-thon! This was a focused period dedicated to making identifications, especially for people who had never tried identifying before. Hopefully you were inspired to make an ID for the first time, to try to ID a new group of organisms, or just to continue working on the groups and places you normally ID!
The collective impact of the ID-a-thon added up:
Instead of emphasizing speed or competition during the ID-a-thon, we highlighted approachable activities designed to help people get started and build confidence. These numbers may look incremental on their own, but across millions of observations, they are a real and lasting improvement to the dataset.
One of our favorite outcomes of the ID-a-thon didn’t show up in the graphs.
Many of you set your own personal goals, like identifying a certain number of observations, or tackling specific taxonomic groups or places. Several people even shared these goals in the forum and in the comments on our ID-a-thon kickoff post … and then came back later to celebrate reaching them.
That kind of self-directed motivation —and the willingness to share progress publicly— is exactly the sort of community energy we hoped to spark!
If you’re new to identifying on iNaturalist, the ID-a-thon doesn’t have to end here. The resources on the ID-a-thon page are always available, and they’re designed to help you keep building skills at your own pace. Every identification helps move observations forward and makes it easier for others to contribute.
We also plan to continue our webinar series about helping with IDs on iNaturalist. We started off with the basics during the first webinar back in November, but subsequent webinars will start diving into other activities and features for identifying.
We also want to take a moment to recognize the regular identifiers who quietly power so much of iNaturalist. Only about 12% of iNaturalist users have ever made an identification for someone else, and the number of people who identify consistently is even smaller. Yet this part of the community has an outsized impact, turning observations into data that can be used for conservation and research, and helping other iNaturalist community members learn more about the nature around them. If you’re one of our regular identifiers: THANK YOU!
Our hope is that some of the people who tried identifying for the first time during the ID-a-thon will continue to build confidence, find groups they enjoy identifying, and eventually join the ranks of these regular contributors. iNaturalist works best when people with different levels of experience all participate together.
We’re planning to run the ID-a-thon again — at least annually — and we’d love your help shaping what comes next. In the comments, let us know:
Whether you made your very first ID during the ID-a-thon or have been identifying for years, thank you for being part of what makes iNaturalist the unique, collaborative, and wonderful community it is.

We’re proposing a change that would prevent observations from being labeled at the subspecies level unless there is community support at that rank. To help make this concrete, we’ve set up a temporary demo where you can try the proposed alternative and vote on whether you prefer it to the current behavior.
One challenge we consistently face is explaining subtle changes to complex parts of iNaturalist — both how things work today and how we’re proposing they could work instead. Text and static illustrations often fall short, and by the time a change is deployed, it’s usually too late to gather meaningful feedback or prepare the community for what’s coming.
We’re using our ongoing work to improve how identifications determine an observation’s label and quality grade as a test case. Last fall, we fixed a bug that allowed some observations to reach Research Grade at ranks where they didn’t have community support (for example, a subspecies-level label with support only at the species level). While that fix addressed the bug, it also introduced new edge cases — most notably making it harder in some situations to reach Research Grade without adding subspecies identifications that many identifiers would prefer not to make. Many identifiers feel that if an observation has support at the species level, it should be Research Grade at that level, even if there are leading subspecies identifications; that is the alternative we are demoing here.
This demo simulates how identifications currently interact to determine an observation’s label, alongside a proposed alternative approach. You can experiment with adding and removing identifications, and the demo will simulate the impact on the observation’s label and quality grade. We encourage you to explore both approaches and vote on which you prefer.
We will keep this demo open for two weeks, then assess the feedback. If our proposed solution is sound, we'll deploy it within four weeks after that. If it doesn't seem to work, we'll put our thinking caps back on and report back when we have another approach.
Thank you for taking the time to explore the demo and share your feedback!
Thanks everyone for the thoughtful feedback and for trying the demo—this thread has been incredibly helpful. We’re going to keep the demo open through the end of the feedback window and spend the long weekend digesting the themes (especially around subspecies visibility/search, RG-at-species vs subspecies workflows, and ID-order edge cases). Please keep voting and sharing concrete examples—links are especially useful.
The survey is now closed, but the demo will remain open to explore. The vote came out 200 in favor of the alternative vs 67 for the current behavior. While there’s interest in the alternative, the discussion here makes it clear there isn’t enough consensus on a direction yet. We’re going back to the drawing board to think through additional options and tradeoffs, and we’ll report back here soon with next steps.


Today, more than 290 million iNaturalist observations document life on Earth — each one a personal experience with nature, and together supporting nearly 7,000 scientific papers and ongoing conservation efforts around the world.
iNaturalist is guided by three interconnected goals: connecting people with nature, advancing biodiversity science, and protecting habitats and at-risk species.
As we begin a new year, thank you. This community’s curiosity, knowledge, and care for nature help make it possible to deepen our shared understanding of the natural world and help protect it for the future.
See iNaturalist’s Global Year in Review
Last year, iNaturalist brought people closer to the nature around them, expanded our understanding of biodiversity across the world, and informed on-the-ground efforts to protect habitats and at-risk species. Here are just a few examples from 2025.

Finding friends (and love) through community science In case you missed it, this Washington Post story highlights how iNaturalist community members are finding friendship (and sometimes even romance) through sharing observations and organizing real-world meetups. It's a lovely reminder that when we pay attention to nature, we often discover meaningful connections with each other, too. | Celebrating 10 years of the City Nature Challenge A decade ago, Los Angeles and San Francisco started a friendly competition to see who could document more urban biodiversity. That spark grew into something extraordinary — in 2025, over 102,000 people across six continents documented 3.3 million observations of nearly 74,000 species in just four days. |

iNaturalist as research infrastructure iNaturalist now contributes the most data on the widest diversity of species globally to the Global Biodiversity Information Facility (GBIF). A study published in 2025 and covered in The New York Times shows how these data are increasingly used in assessments, conservation planning, policy, and more — every observation can feed into real-world decisions. | Hundreds of new-to-science species descriptions, powered by community science This year, a study highlighted several new plant species described since 2022 that started with community science. So much discovery depends on experts engaging directly with community photos and IDs — and so much potential is still waiting in the backlog. A few other new species described this year include the Woolly Devil, new man-o-wars, this stunning wasp-mimicking flower fly, and a rare orchid. |

Rediscoveries and first sightings Around the world, iNaturalist records are surfacing species that were presumed extinct or missing for decades. Some highlights from this year include this gorgeous sea slug not seen since its first documentation in 1864, the greater chestnut weevil (presumed extinct until recently), and a striking orange flower photographed alive for the first time. See even more highlighted in this 2025 TED Talk. | Building species lists for conservation efforts People used iNaturalist to document the remarkable diversity of species worldwide — from wetland organisms in Nigeria to flora of Mongolia, plants in Madagascar to bees in Cuba, and beyond. Creating these lists is a critical first step in protecting and stewarding local ecosystems. |
These stories are just a handful that show what’s possible when this community comes together — and looking ahead, we’re going to keep building on that foundation.

In 2026, our mission remains the same: to connect people with nature and advance biodiversity science and conservation.
We believe in creating tools that enhance human experiences and engagement, supporting everyone from expert naturalists to newcomers just beginning to explore the natural world. The more people who document nature around the world, the better our understanding of biodiversity becomes.
When someone shares their first observation, they’re not just documenting a sighting; they’re contributing to a global dataset that powers conservation decisions. When an experienced naturalist adds an identification, they’re not just helping one observer learn; they’re making critical data more valuable for researchers, land managers, and more. Our goal is to improve the platform to better support what makes iNaturalist unique: the community.
As always, thank you for everything you do for iNaturalist! Keep an eye out for more details about our 2026 plans soon.

Happy New Year everyone! We released computer vision and geomodel update v.2.27 today with 112,613 taxa, up from 111,435 taxa in the last model; it is trained on data exported on November 16, 2025.
This model is the result of millions of observations and identifications shared on iNaturalist. As new taxa are observed and identified, taxa (mostly species, but also genera, families, etc.) are added to the model once there are around 100 photos and 60 observations of that taxon.
As taxonomy changes and misidentifications get corrected by the community, sometimes taxa are also removed from the model. To keep up with these changes, we update the model every month or two so that the community can benefit from the improvements. Check out our help page for the most updated information on how we update the model.
Below are links to the taxa added since the last model. You can click through and search for your username to see if you have observed or identified any of these species. If so, thank you!
The graph below shows how the number of taxa included in the model has grown over the last few years — more than doubling from 55,000 in 2022 to over 110,000 today.

Each time we release a new model, we evaluate it against the previous one.
The graph below shows model accuracy estimates using 1,000 random Research Grade observations in each group not seen during training time. The paired bars below compare average accuracy of model 2.26 with the new model 2.27.
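The evaluation described above can be sketched as a simple paired comparison. This is a hypothetical illustration of the idea only — the `predict_old`/`predict_new` stubs and the data layout are assumptions, not iNaturalist’s actual pipeline: for each group, sample held-out Research Grade observations and compare top-1 accuracy of the two models on the same sample.

```python
import random

def paired_accuracy(observations, predict_old, predict_new, n=1000, seed=0):
    """Compare two models on the same random sample (illustrative sketch).

    `observations` is a list of dicts with a "photo" and its community
    "taxon"; `predict_old` / `predict_new` are stand-ins for the two
    model versions. Returns (old_accuracy, new_accuracy).
    """
    rng = random.Random(seed)
    sample = rng.sample(observations, min(n, len(observations)))
    old_hits = sum(predict_old(o["photo"]) == o["taxon"] for o in sample)
    new_hits = sum(predict_new(o["photo"]) == o["taxon"] for o in sample)
    return old_hits / len(sample), new_hits / len(sample)
```

Because both models are scored on the same held-out sample per group, the paired bars in the chart reflect like-for-like accuracy rather than differences in the evaluation data.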
