The Challenge of Choosing Between AI Research Assistants

Google's latest AI, Gemini Ultra 1.0, recently stumbled over a challenge that left my trusty assistant, ChatGPT, unfazed. This raises the question: can Gemini truly replace ChatGPT as a research aide? More fundamentally, can generative AI serve as a credible research assistant for writers on platforms like Medium?

As a science writer, my aim is to engage with readers who seek to understand not just the events unfolding around them but the reasons behind these occurrences. In an increasingly chaotic world, my writing strives to bring clarity through scientific insights.

My articles rely heavily on research. Although I craft my own pieces, I often find myself delving into complex subjects that extend beyond my expertise. Unlike established journalists who have access to research teams and specialized resources, I navigate these challenges alone.

I have no commercial or academic ties, and my writing budget barely stays in the black. For writers like me, the emergence of generative AI has been a game changer. I've eagerly explored various GPT-based applications, including AI-Writer, Perplexity.ai, ChatPDF, Scholarcy, Grammarly, Hemingway Editor, and Scite. However, the giants of the field, OpenAI's ChatGPT and Google's Bard, have overshadowed these niche tools.

After extensive queries with both ChatGPT and Bard, I settled on ChatGPT as my primary research assistant, with Bard serving as a secondary option for trending topics. ChatGPT provided more in-depth analyses, but its knowledge was limited by its training cutoff date. Bard, conversely, offered less detail but could search the internet in real time. Both models have progressed since their initial launch.

The Arrival of Gemini: A New Contender

On February 11, 2024, I received an email from Google announcing their enhanced AI model, Gemini Ultra 1.0, available for a subscription of $19.99 per month. The promotion highlighted its advanced capabilities in coding, logical reasoning, following nuanced instructions, and creative collaboration.

Faced with this new option, I decided to evaluate both ChatGPT and Gemini during a two-month free trial. I posed a series of questions ranging from simple to complex. While both performed adequately, it was a particularly challenging question that revealed the true capabilities of each.
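I ran all of my comparisons in the two chat interfaces, but readers who want to reproduce a head-to-head test programmatically could do so through the companies' developer APIs. The sketch below is only an illustration of that idea, assuming the openai and google-generativeai Python packages and valid API keys; the model names and prompt are placeholders, and this is not the workflow I used for this article.

```python
# Hypothetical side-by-side comparison sketch (not the author's actual workflow,
# which used the ChatGPT and Gemini web apps). Assumes API keys in the environment.
import os

import google.generativeai as genai
from openai import OpenAI

PROMPT = ("What psychological and social processes lead a group of people "
          "to form a consensus on a divisive issue?")

# Ask OpenAI's GPT-4 via the chat completions endpoint
openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
gpt_response = openai_client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": PROMPT}],
)
print("ChatGPT:\n", gpt_response.choices[0].message.content)

# Ask Google's Gemini via the generative AI SDK
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
gemini_model = genai.GenerativeModel("gemini-pro")
gemini_response = gemini_model.generate_content(PROMPT)
print("Gemini:\n", gemini_response.text)
```

Sending the identical prompt to both models and reading the replies side by side is essentially what I did by hand during the trial period.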

Testing Gemini Ultra 1.0 Against ChatGPT-4

I began my assessment with straightforward inquiries, such as "What are the updated NFL overtime rules?" Both AI models correctly provided the latest information. Next, I increased the difficulty by asking about "hematopoiesis," a topic I was less familiar with.

While generative AI can summarize extensive Wikipedia entries, I found that ChatGPT produced more organized and thorough answers compared to Gemini. In general, the content from both models was similar, but ChatGPT's style and depth appealed to me more.

However, I wanted to test their limits with a more challenging question. I presented both AI systems with a complex query:

Gemini's Response to the Cannonball Challenge

My Prompt to Gemini: What psychological and social processes lead a group of people to form a consensus on a divisive issue? For instance, how have U.S. citizens agreed that Joe Biden is too old for a second term while viewing Donald Trump as acceptable?

Gemini's Response: Elections are a complex topic with fast-changing information. To ensure you have the latest and most accurate information, try Google Search.

ChatGPT-4's Response to the Cannonball Challenge

My Prompt to ChatGPT: (Refer to the exact prompt given to Gemini above.)

ChatGPT's Response: The consensus regarding political figures’ suitability, particularly based on age, results from a complex interplay of psychological and social factors. For U.S. citizens evaluating Biden and Trump, several elements contribute to this consensus formation:

Psychological Factors

1. Cognitive Biases: Individuals are prone to biases that influence their judgments, like confirmation bias, which leads them to favor information that aligns with their existing beliefs.
2. Social Identity Theory: This suggests that people's identities are shaped by their social affiliations, including political parties, which can skew perceptions of political figures.
3. Heuristic Processing: People often make decisions based on mental shortcuts, such as associating a candidate's qualities with their perceived role as a leader.

Social Factors

1. Influence and Norms: Opinions are shaped by social circles, where dominant views can sway consensus.
2. Media Framing: How media portrays political figures significantly affects public perception.
3. Political Campaigns: Campaign strategies can influence how age-related concerns about candidates are perceived.
4. Cultural Values: Societal beliefs about age and competence also play a role, with differing views based on cultural context.

These psychological and social factors are interconnected and evolve with current events and shifts in public opinion. While a consensus may appear on topics like age suitability for leadership, opinions can vary widely among different demographic groups.

Determining My Preferred AI Research Assistant

As my experiments show, Gemini faltered on a challenging question, while ChatGPT delivered a comprehensive response in under 20 seconds. Despite multiple attempts, Gemini consistently failed to provide a satisfactory answer.

Occasionally, I received decent responses from Gemini, but often they ended with the suggestion to "try Google Search." So, which AI should I choose as my primary research assistant?

For the moment, ChatGPT remains my top choice, with Gemini as a close second.

Ethical Considerations of Using Generative AI in Research

In comparing these two AI tools for their usefulness in studying human behavior, I anticipate readers questioning the ethics of utilizing AI for research. Am I compromising my integrity by relying on AI-generated responses?

Medium's CEO, Tony Stubblebine, has criticized AI companies for taking copyrighted material from the internet without offering writers proper acknowledgment or compensation.

Generative AI is still in its nascent stages, and its potential for misuse is significant, ranging from deepfake video to fabricated audio. Such issues could have serious repercussions, particularly if political narratives are manipulated.

As Stubblebine points out, there are ethical dilemmas surrounding the use of copyrighted content in AI training data. This challenge is being addressed globally, with various stakeholders seeking solutions.

Having built a personal library of articles and news clippings, I wonder whether I should forgo the AI tools that could enhance my research efforts. I take some comfort in knowing that regulators are actively debating the ethical implications of using online content to train AI models.

For now, I see generative AI as a legitimate research tool for several reasons:

1. We are part of a globally connected community.
2. Nations face shared challenges, including resource scarcity and climate issues.
3. Science and technology are crucial for addressing these challenges.
4. No single discipline can solve these complex problems alone.
5. Collaborative approaches across disciplines and cultures are essential.
6. Large language models represent a valuable opportunity to create a shared knowledge base for researchers and policymakers.

As a science writer tackling issues that require broad interdisciplinary perspectives, I believe generative AI will increasingly provide access to a wealth of knowledge.

My articles remain original, and I verify any scientific claims the AI tools make against credible sources. Although my writing process can be lengthy, AI assistance streamlines my research and makes it more efficient. I appreciate the support from ChatGPT and the evolving capabilities of Gemini, which I expect will continue to improve.
