Hello!
In recent months, artificial intelligence (AI) tools designed for scientific research have made remarkable strides. Almost every day, new articles announce innovative tools aimed at enhancing technical, administrative, and creative tasks. In my own work, I have started to integrate some of these tools, which has given me a practical perspective on their benefits and on how they improve my productivity and the quality of my output. In this post, I would like to share some of the tools I have been using lately that I find particularly valuable for my work.
1. So many large language models!
2. Speeding up literature reviews
3. Simplifying manual work
4. Image generation
So many large language models!
The quest to create the most realistic, intelligent, and valuable AI assistants has prompted many companies and teams worldwide to develop and release various large language models (LLMs). These models vary in capability, which typically correlates with their size and, in turn, affects both the quality of their output and their inference speed. Established organizations like OpenAI and Anthropic offer subscription options for users seeking access to the latest and most advanced models, along with priority usage. In contrast, free-tier users generally have access to more limited versions of these models and face usage restrictions, which calls for managing the available quota strategically.
From my experience, the free models available on different platforms have provided adequate support for most tasks I have attempted.
Here is how I typically divide my efforts: I use ChatGPT and Claude (3.5 Sonnet) for the most intensive and demanding tasks. For coding support, I rely on Copilot and some smaller models offered by these companies. For isolated questions, I usually turn to Llama or DeepSeek. Additionally, for quick queries and code fixes, I find Sonar-Pro, available on Perplexity Labs, to strike the perfect balance. I also alternate with the AI search engine from DuckDuckGo, where flagship free models from OpenAI, Meta, and Anthropic are available and can be used with notable efficiency. Typically, I assess the complexity of my problem to determine where to start on this hierarchy of models. If the first model I try cannot resolve the issue promptly, I move up to a more powerful option as needed.
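This tiered workflow can be sketched in code. The snippet below is purely illustrative: the model names and the `ask` function are hypothetical stand-ins for real API calls, and the "did it answer?" check is simulated. It only shows the escalation logic itself.

```python
# Hypothetical sketch of a tiered-model strategy: try a lightweight model
# first and escalate up the hierarchy only when it fails to answer.
# Model names and the `ask` function are illustrative stand-ins, not a real API.

MODEL_TIERS = ["small-free-model", "mid-tier-model", "flagship-model"]

def ask(model, question):
    # Stand-in for a real API call. Here we simply simulate that only the
    # flagship model can handle questions containing the word "hard".
    if "hard" in question and model != "flagship-model":
        return None  # model could not resolve the issue
    return f"{model} answered: {question}"

def answer_with_escalation(question, tiers=MODEL_TIERS):
    """Walk up the model hierarchy until one produces an answer."""
    for model in tiers:
        result = ask(model, question)
        if result is not None:
            return result
    raise RuntimeError("No model in the hierarchy could answer.")
```

An easy question is resolved by the first (cheapest) tier, while a question the smaller models cannot handle automatically escalates to the flagship model.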
Speeding up literature reviews
In a previous post, I briefly mentioned scite. This tool, combined with its recent feature called “Assistant”, is designed to support researchers in conducting literature reviews by integrating AI with a well-structured network of scientific databases. Users can ask questions about a specific topic, and scite will provide a detailed answer, citing relevant papers accordingly. All cited papers are listed, allowing users to explore any specific entry further if they want to learn more.
There are also various settings to customize the literature search, including options to filter by publication year, journal, or publisher. My impression is that this product has significantly improved since I first tried it years ago. The answers feel much more specific, and the literature usually aligns well with the topic or question posed. Typically, I use this tool when I have an open-ended question about an unfamiliar subject. The responses are well-structured and help me identify useful articles to serve as a starting point for further research. During writing tasks, this tool is also invaluable, particularly for the discussion and results sections, where it is essential to frame our findings within the current state of the field.
Simplifying manual work
With a more practical perspective in mind, I have found that large language models (LLMs) are tremendously helpful for specific tasks. One of the most impressive features is their ability to parse information from images. It amazes me that I can take a screenshot of a table or figure and ask Claude or ChatGPT to format it into a structured data container, such as a table or script. They can read the image and produce an output that is almost always accurate.
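To make this concrete, here is a hedged sketch of how a screenshot might be packaged for a vision-capable chat model. The payload shape below mirrors common chat-style APIs but is an assumption, not any provider's exact format; the function name and prompt text are mine, and the request is only constructed, never sent.

```python
import base64

# Illustrative sketch: package a screenshot of a table as a request for a
# vision-capable chat model. The payload structure is an assumption modeled
# on common chat APIs; check your provider's documentation for the real format.

def build_table_extraction_request(image_bytes, media_type="image/png"):
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {
        "messages": [
            {
                "role": "user",
                "content": [
                    # The screenshot, base64-encoded as most APIs expect
                    {"type": "image", "media_type": media_type, "data": encoded},
                    # The instruction telling the model what structure to emit
                    {"type": "text",
                     "text": "Transcribe this table as CSV, keeping the header row."},
                ],
            }
        ]
    }
```

The model's reply would then contain the table as plain CSV, ready to paste into a script or spreadsheet.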
Additionally, the ability to format content at scale is fantastic. Tasks that would typically take hours of manual labor and require extreme attention to detail to avoid errors in copying and pasting can now be completed in minutes with the support of LLMs. This efficiency is particularly evident in coding, where LLMs can assist with tasks like writing documentation or formatting functions according to best practices.
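As a minimal sketch of the kind of bulk reformatting involved: the snippet below converts a pasted whitespace-separated table into CSV in plain Python. For clean input this is trivial, which is exactly the point; for the messier, irregular text that usually lands on my clipboard, an LLM handles the edge cases far faster than hand-editing.

```python
# Minimal sketch of a bulk-reformatting task: turn a pasted
# whitespace-separated table into CSV rows.

def table_to_csv(raw: str) -> str:
    """Split each non-empty line on whitespace and rejoin with commas."""
    rows = [line.split() for line in raw.strip().splitlines()]
    return "\n".join(",".join(cells) for cells in rows)

pasted = """
name  value  unit
mass  3.2    kg
time  1.5    s
"""
```

Calling `table_to_csv(pasted)` yields comma-separated rows with the header preserved.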
Overall, while these applications may seem simple, the ability to delegate them to an assistant that can complete them faster and often with improved accuracy frees up mental space for me to focus on more demanding tasks.
Image generation
Recent discussions regarding the use of AI-generated content in scientific articles have sparked some controversy, particularly with AI image generation. However, I would like to emphasize how these tools can enhance presentations and scientific discussions by enabling researchers to create visual representations of their ideas. I have been experimenting with several AI image generators, including FLUX and Qwen-2.5, and have observed their performance firsthand. Crafting a well-formed prompt is not only challenging but also a great mental exercise that helps achieve the desired output. It is important to acknowledge the use of these tools in any professional context. In our group, I have noticed positive reactions when such content is used to initiate discussions or support brainstorming sessions.
Conclusion
AI and large language models (LLMs) are advancing rapidly, and I am genuinely excited to see how these tools will evolve and benefit both the scientific community and the general public. Despite the incredible pace at which new tools are being introduced, it feels like we are only at the beginning of making such assistants accessible to everyone.
Please feel free to share your thoughts about using LLMs in research!
Have a great day!
Acknowledgements
Parts of this text were edited with Grammarly AI.