
Should I Use ChatGPT to Write My Essays?

Everything high school and college students need to know about using — and not using — ChatGPT for writing essays.

Jessica A. Kent

ChatGPT is one of the most buzzworthy technologies today.

Along with other generative artificial intelligence (AI) models, it is expected to change the world. In academia, students and professors are preparing for the ways that ChatGPT will shape education, and especially how it will impact a fundamental element of any course: the academic essay.

Students can use ChatGPT to generate full essays based on a few simple prompts. But can AI actually produce high quality work, or is the technology just not there yet to deliver on its promise? Students may also be asking themselves if they should use AI to write their essays for them and what they might be losing out on if they did.

AI is here to stay, and it can either be a help or a hindrance depending on how you use it. Read on to become better informed about what ChatGPT can and can’t do, how to use it responsibly to support your academic assignments, and the benefits of writing your own essays.

What is Generative AI?

Artificial intelligence isn’t a twenty-first century invention. Beginning in the 1950s, computer scientists started programming computers to solve problems and understand spoken language. AI’s capabilities grew as computer speeds increased, and today we use AI for data analysis, finding patterns, and generating insights from the data it collects.

But why the sudden popularity of recent applications like ChatGPT? This new generation of AI goes further than data analysis: generative AI creates new content. It does this by analyzing large amounts of data — GPT-3 was trained on 45 terabytes of data, or a quarter of the Library of Congress — and then generating new content based on the patterns it sees in the original data.

It’s like the predictive text feature on your phone; as you start typing a new message, predictive text makes suggestions of what should come next based on data from past conversations. Similarly, ChatGPT creates new text based on past data. With the right prompts, ChatGPT can write marketing content, code, business forecasts, and even entire academic essays on any subject within seconds.
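The “predict what comes next” idea can be illustrated with a toy sketch. This is only an illustration, not how ChatGPT actually works: real models use neural networks trained on vast datasets, while this sketch simply records which word followed which in a tiny sample text and chains those lookups together:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which words follow which in the training text."""
    words = text.lower().split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length=8, seed=0):
    """Generate 'new' text by repeatedly picking a plausible next word."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # no known continuation for this word
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the essay was good and the essay was long and the writing was clear"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Everything the sketch produces is recombined from its training text, which is exactly why generative AI can only echo patterns in existing data.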

But is generative AI as revolutionary as people think it is, or is it lacking in real intelligence?

The Drawbacks of Generative AI

It seems simple. You’ve been assigned an essay to write for class. You go to ChatGPT and ask it to write a five-paragraph academic essay on the topic you’ve been assigned. You wait a few seconds and it generates the essay for you!

But ChatGPT is still in its early stages of development, and that essay is likely not as accurate or well-written as you’d expect it to be. Be aware of the drawbacks of having ChatGPT complete your assignments.

It’s not intelligence, it’s statistics

One of the misconceptions about AI is that it has a degree of human intelligence. However, its intelligence is actually statistical analysis, as it can only generate “original” content based on the patterns it sees in already existing data and work.

It “hallucinates”

Generative AI models often provide false information — so much so that there’s a term for it: “AI hallucination.” OpenAI even has a warning on its home screen, saying that “ChatGPT may produce inaccurate information about people, places, or facts.” This may be due to gaps in its data, or because it lacks the ability to verify what it’s generating.

It doesn’t do research  

If you ask ChatGPT to find and cite sources for you, it will do so, but they could be inaccurate or even made up.

This is because AI doesn’t know how to look for relevant research that can be applied to your thesis. Instead, it generates content based on past content, so if a number of papers cite certain sources, it will generate new content that sounds credible and well-sourced, even though the citations it produces may be inaccurate or entirely invented.

There are data privacy concerns

When you input your data into a public generative AI model like ChatGPT, where does that data go and who has access to it? 

Prompting ChatGPT with original research should be a cause for concern — especially if you’re inputting study participants’ personal information into the third-party, public application. 

JPMorgan has restricted use of ChatGPT due to privacy concerns, Italy temporarily blocked ChatGPT in March 2023 after a data breach, and Security Intelligence advises that “if [a user’s] notes include sensitive data … it enters the chatbot library. The user no longer has control over the information.”

It is important to be aware of these issues and take steps to ensure that you’re using the technology responsibly and ethically. 

It skirts the plagiarism issue

AI creates content by drawing on a large library of information that’s already been created, but is it plagiarizing? Could there be instances where ChatGPT “borrows” from previous work and places it into your work without citing it? Schools and universities today are wrestling with this question of what’s plagiarism and what’s not when it comes to AI-generated work.

To demonstrate this, one Elon University professor gave his class an assignment: Ask ChatGPT to write an essay for you, and then grade it yourself. 

“Many students expressed shock and dismay upon learning the AI could fabricate bogus information,” he writes, adding that he expected some essays to contain errors, but all of them did. 

His students were disappointed that “major tech companies had pushed out AI technology without ensuring that the general population understands its drawbacks” and were concerned about how many embraced such a flawed tool.


How to Use AI as a Tool to Support Your Work

As more students are discovering, generative AI models like ChatGPT just aren’t as advanced or intelligent as they may believe. While AI may be a poor option for writing your essay, it can be a great tool to support your work.

Generate ideas for essays

Have ChatGPT help you come up with ideas for essays. For example, input specific prompts, such as, “Please give me five ideas for essays I can write on topics related to WWII,” or “Please give me five ideas for essays I can write comparing characters in twentieth century novels.” Then, use what it provides as a starting point for your original research.

Generate outlines

You can also use ChatGPT to help you create an outline for an essay. Ask it, “Can you create an outline for a five-paragraph essay based on the following topic?” and it will create an outline with an introduction, body paragraphs, conclusion, and a suggested thesis statement. Then, you can expand upon the outline with your own research and original thought.

Generate titles for your essays

Titles should draw a reader into your essay, yet they’re often hard to get right. Have ChatGPT help you by prompting it with, “Can you suggest five titles that would be good for a college essay about [topic]?”
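If you find yourself reusing the kinds of prompts suggested above, a small helper can keep them consistent. This is a hypothetical sketch (the function and template names are ours, not from ChatGPT or OpenAI); you would still paste the resulting prompt into ChatGPT yourself:

```python
# Hypothetical prompt templates matching the three uses above:
# brainstorming ideas, outlining, and titling an essay.
TEMPLATES = {
    "ideas": "Please give me five ideas for essays I can write on topics related to {topic}.",
    "outline": "Can you create an outline for a five-paragraph essay based on the following topic: {topic}?",
    "titles": "Can you suggest five titles that would be good for a college essay about {topic}?",
}

def build_prompt(kind, topic):
    """Fill in one of the brainstorming templates with a topic."""
    return TEMPLATES[kind].format(topic=topic)

print(build_prompt("ideas", "WWII"))
```

However you phrase the prompt, the output is only a starting point: the research, argument, and writing still need to be your own.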

The Benefits of Writing Your Essays Yourself

Asking a robot to write your essays for you may seem like an easy way to get ahead in your studies or save some time on assignments. But outsourcing your work to ChatGPT can negatively impact not just your grades, but your ability to communicate and think critically as well. Writing your essays yourself is always the best approach.

Create your own ideas

Writing an essay yourself means that you’re developing your own thoughts, opinions, and questions about the subject matter, then testing, proving, and defending those thoughts. 

When you complete school and start your career, projects aren’t simply about getting a good grade or checking a box, but can instead affect the company you’re working for — or even impact society. Being able to think for yourself is necessary to create change and not just cross work off your to-do list.

Building a foundation of original thinking and ideas now will help you carve your unique career path in the future.

Develop your critical thinking and analysis skills

In order to test or examine your opinions or questions about a subject matter, you need to analyze a problem or text, and then use your critical thinking skills to determine the argument you want to make to support your thesis. Critical thinking and analysis skills aren’t just necessary in school — they’re skills you’ll apply throughout your career and your life.

Improve your research skills

Writing your own essays will train you in how to conduct research, including where to find sources, how to determine if they’re credible, and their relevance in supporting or refuting your argument. Knowing how to do research is another key skill required throughout a wide variety of professional fields.

Learn to be a great communicator

Writing an essay involves communicating an idea clearly to your audience, structuring an argument that a reader can follow, and making a conclusion that challenges them to think differently about a subject. Effective and clear communication is necessary in every industry.

Be impacted by what you’re learning

Engaging with the topic, conducting your own research, and developing original arguments allows you to really learn about a subject you may not have encountered before. Maybe a simple essay assignment around a work of literature, historical time period, or scientific study will spark a passion that can lead you to a new major or career.

Resources to Improve Your Essay Writing Skills

While there are many rewards to writing your essays yourself, the act of writing an essay can still be challenging, and the process may come easier for some students than others. But essay writing is a skill that you can hone, and students at Harvard Summer School have access to a number of on-campus and online resources to assist them.

Students can start with the Harvard Summer School Writing Center, where writing tutors can offer you help and guidance on any writing assignment in one-on-one meetings. Tutors can help you strengthen your argument, clarify your ideas, improve the essay’s structure, and lead you through revisions.

The Harvard libraries are a great place to conduct your research, and its librarians can help you define your essay topic, plan and execute a research strategy, and locate sources. 

Finally, review “The Harvard Guide to Using Sources,” which can guide you on what to cite in your essay and how to do it. Be sure to review the “Tips For Avoiding Plagiarism” on the “Resources to Support Academic Integrity” webpage as well to help ensure your success.


The Future of AI in the Classroom

ChatGPT and other generative AI models are here to stay, so it’s worthwhile to learn how you can leverage the technology responsibly and wisely so that it can be a tool to support your academic pursuits. However, nothing can replace the experience and achievement gained from communicating your own ideas and research in your own academic essays.

About the Author

Jessica A. Kent is a freelance writer based in Boston, Mass. and a Harvard Extension School alum. Her digital marketing content has been featured on Fast Company, Forbes, Nasdaq, and other industry websites; her essays and short stories have been featured in North American Review, Emerson Review, Writer’s Bone, and others.


PrepScholar

Can You Use ChatGPT for Your College Essay?


ChatGPT has become a popular topic of conversation since its official launch in November 2022. The artificial intelligence (AI) chatbot can be used for all sorts of things, like having conversations, answering questions, and even crafting complete pieces of writing.

If you’re applying for college, you might be wondering about ChatGPT’s potential for college admissions. Should you use a ChatGPT college essay in your application?

By the time you finish reading this article, you’ll know much more about ChatGPT, including how students can use it responsibly and if it’s a good idea to use ChatGPT on college essays . We’ll answer all your questions, like:

  • What is ChatGPT and why are schools talking about it?
  • What are the good and bad aspects of ChatGPT?
  • Should you use ChatGPT for college essays and applications?
  • Can colleges detect ChatGPT?
  • Are there other tools and strategies that students can use, instead?

We’ve got a lot to cover, so let’s get started!


Schools and colleges are worried about how new AI technology affects how students learn. (Don't worry. Robots aren't replacing your teachers...yet.)

What Is ChatGPT and Why Are Schools Talking About It?

ChatGPT (short for “Chat Generative Pre-trained Transformer”) is a chatbot created by OpenAI, an artificial intelligence research company. ChatGPT can be used for various tasks, like having human-like conversations, answering questions, giving recommendations, translating words and phrases—and writing things like essays.

In order to do this, ChatGPT uses a neural network that’s been trained on thousands of resources to predict relationships between words. When you give ChatGPT a task, it uses that knowledge base to interpret your input or query. It then analyzes its data banks to predict the combinations of words that will best answer your question. 

So while ChatGPT might seem like it’s thinking, it’s actually pulling information from hundreds of thousands of resources, then answering your questions by looking for patterns in that data and predicting which words come next.

Why Schools Are Concerned About ChatGPT

Unsurprisingly, schools are worried about ChatGPT and its misuse, especially in terms of academic dishonesty and plagiarism . Most schools, including colleges, require students’ work to be 100% their own. That’s because taking someone else’s ideas and passing them off as your own is stealing someone else’s intellectual property and misrepresenting your skills. 

The problem with ChatGPT from schools’ perspective is that it does the writing and research for you, then gives you the final product. In other words, you’re not doing the work it takes to complete an assignment when you’re using ChatGPT , which falls under schools’ plagiarism and dishonesty policies.  

Colleges are also concerned with how ChatGPT will negatively affect students’ critical thinking, research, and writing skills . Essays and other writing assignments are used to measure students’ mastery of the material, and if students submit ChatGPT college essays, teachers will just be giving feedback on an AI’s writing…which doesn’t help the student learn and grow. 

Beyond that, knowing how to write well is an important skill people need to be successful throughout life. Schools believe that if students rely on ChatGPT to write their essays, they’re doing more than just plagiarizing—they’re impacting their ability to succeed in their future careers. 

Many Schools Have Already Banned ChatGPT

Schools have responded surprisingly quickly to AI use, including ChatGPT. Worries about academic dishonesty, plagiarism, and mis/disinformation have led many high schools and colleges to ban the use of ChatGPT . Some schools have begun using AI-detection software for assignment submissions, and some have gone so far as to block students from using ChatGPT on their internet networks. 

It’s likely that schools will begin revising their academic honesty and plagiarism policies to address the use of AI tools like ChatGPT. You’ll want to stay up-to-date with your schools’ policies. 


ChatGPT is pretty amazing...but it's not a great tool for writing college essays. Here's why.

ChatGPT: College Admissions and Entrance Essays

College admissions essays—also called personal statements—ask students to explore important events, experiences, and ideas from their lives. A great entrance essay will explain what makes you you!

ChatGPT is a machine that doesn’t know and can’t understand your experiences. That means using ChatGPT to write your admissions essays isn’t just unethical. It actually puts you at a disadvantage because ChatGPT can’t adequately showcase what it means to be you. 

Let’s take a look at four ways ChatGPT negatively impacts college admissions essays.

#1: ChatGPT Lacks Insight

We recommend students use unexpected or slightly unusual topics because they help admissions committees learn more about you and what makes you unique. The chatbot doesn’t know any of that, so nothing ChatGPT writes can accurately reflect your experience, passions, or goals for the future.

Because ChatGPT will make guesses about who you are, it won’t be able to share what makes you unique in a way that resonates with readers. And since that’s what admissions counselors care about, a ChatGPT college essay could negatively impact an otherwise strong application.  

#2: ChatGPT Might Plagiarize 

Writing about experiences that many other people have had isn’t a very strong approach to take for entrance essays. After all, you don’t want to blend in—you want to stand out!

If you write your essay yourself and include key details about your past experiences and future goals, there’s little risk that you’ll write the same essay as someone else. But if you use ChatGPT—who’s to say someone else won’t, too? Since ChatGPT uses predictive guesses to write essays, there’s a good chance the text it uses in your essay already appeared in someone else’s.  

Additionally, ChatGPT learns from every single interaction it has. So even if your essay isn’t plagiarized, it’s now in the system. That means the next person who uses ChatGPT to write their essay may end up with yours. You’ll still be on the hook for submitting a ChatGPT college essay, and someone else will be in trouble, too.

#3: ChatGPT Doesn’t Understand Emotion 

Keep in mind that ChatGPT can’t experience or imitate emotions, and so its writing samples lack, well, a human touch!

A great entrance essay will explore experiences or topics you’re genuinely excited about or proud of. This is your chance to show your chosen schools what you’ve accomplished and how you’ll continue growing and learning, and an essay without emotion would be odd considering that these should be real, lived experiences and passions you have!

#4: ChatGPT Produces Mediocre Results

If you’re still curious what would happen if you submitted a ChatGPT college essay with your application, you’re in luck. Both Business Insider and Forbes asked ChatGPT to write a couple of college entrance essays, and then they sent them to college admissions readers to get their thoughts. 

The readers agreed that the essays would probably pass as being written by real students—assuming admissions committees didn’t use AI detection software—but that they both were about what a “very mediocre, perhaps even a middle school, student would produce.” The admissions professionals agreed that the essays probably wouldn’t perform very well with entrance committees, especially at more selective schools.  

That’s not exactly the reaction you want when an admission committee reads your application materials! So, when it comes to ChatGPT college admissions, it’s best to steer clear and write your admission materials by yourself. 


Can Colleges Detect ChatGPT?

We’ve already explained why it’s not a great idea to use ChatGPT to write your college essays and applications, but you may still be wondering: can colleges detect ChatGPT?

In short, yes, they can! 

Software Can Detect ChatGPT

As technology improves and increases the risk of academic dishonesty, plagiarism, and mis/disinformation, software that can detect such technology is improving, too. For instance, OpenAI, the same company that built ChatGPT, is working on a text classifier that can tell the difference between AI-written text and human-written text .  

Turnitin, one of the most popular plagiarism detectors used by high schools and universities, also recently developed the AI Innovation Lab, detection software designed to flag submissions that have used AI tools like ChatGPT. Turnitin says that this tool detects AI writing with 98% confidence.

Plagiarism and AI companies aren’t the only ones interested in AI-detection software. A 22-year-old computer science student at Princeton created an app to detect ChatGPT writing, called GPTZero. This software works by measuring the complexity of ideas and variety of sentence structures.
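The second of those signals, variety of sentence structure, is easy to approximate. Below is a toy sketch that scores “burstiness” purely as variation in sentence length; real detectors combine this with stronger signals like model perplexity, so treat it only as an illustration of the idea:

```python
import re
import statistics

def sentence_length_burstiness(text):
    """Standard deviation of sentence lengths, in words.
    Human writing tends to vary sentence length more than AI writing."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

# Uniform, machine-flavored sample vs. a more varied human-flavored one
uniform = "The essay is good. The topic is clear. The thesis is strong."
varied = ("I waited. After three long years of rejection letters and "
          "revisions, the acceptance email finally arrived. Silence.")
print(sentence_length_burstiness(uniform) < sentence_length_burstiness(varied))
```

A single crude metric like this is easy to fool, which is why detection tools report confidence levels rather than certainties.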

Human Readers Can Detect ChatGPT 

It’s also worth keeping in mind that teachers can spot the use of ChatGPT themselves, even if it isn’t confirmed by a software detector. For example, if you’ve turned in one or two essays to your teacher already, they’re probably familiar with your unique writing style. If you submit a college essay draft that uses totally different vocabulary, sentence structures, and figures of speech, your teacher will likely take note.

Additionally, admissions committees and readers may be able to spot ChatGPT writing, too. ChatGPT (and AI writing, in general) uses more simplistic sentence structures with less variation, so that could make it easier to tell if you’ve submitted a ChatGPT college essay. These professionals also read thousands of essays every year, which means they know what a typical essay reads like. You want your college essay to catch their attention…but not because you used AI software!


Pros and Cons of ChatGPT: College Admissions Edition

ChatGPT is a brand new technology, which means we’re still learning about the ways it can benefit us. It’s important to think about the pros and the cons to any new tool …and that includes artificial intelligence!

Let’s look at some of the good—and not-so-good—aspects of ChatGPT below. 

ChatGPT: The Good

It may seem like we’re focused on just the negatives of using ChatGPT in this article, but we’re willing to admit that the chatbot isn’t all bad. In fact, it can be a very useful tool for learning if used responsibly !

Like we already mentioned, students shouldn’t use ChatGPT to write entire essays or assignments. They can use it, though, as a learning tool alongside their own critical thinking and writing skills.

Students can use ChatGPT responsibly to:

  • Learn more about a topic. It’s a great place to get started for general knowledge and ideas about most subjects.
  • Find reputable and relevant sources on a topic. Students can ask ChatGPT for names and information about leading scholars, relevant websites and databases, and more.
  • Brainstorm ideas for assignments. Students can share the ideas they already have with ChatGPT, and in return, the chatbot can suggest ideas for further exploration and even organization of their points.
  • Check work (that they’ve written themselves!) for errors or clarity. This is similar to how spell- and grammar-checking software is used. ChatGPT may be even better than some competitors for this, because students can actually ask ChatGPT to explain the errors and their solutions—not just to fix them.

Before you use ChatGPT—even for the tasks mentioned above—you should talk to your teacher or school about their AI and academic dishonesty policies. It’s also a good idea to include an acknowledgement that you used ChatGPT with an explanation of its use. 


ChatGPT: The Bad

The first model of ChatGPT (GPT-3.5) was formally introduced to the public in November 2022, and the newer model (GPT-4) in March 2023. So, it’s still very new and there’s a lot of room for improvement.

There are many misconceptions about ChatGPT. One of the most extreme is that the AI is all-knowing and can make its own decisions. Another is that ChatGPT is a search engine that, when asked a question, can just surf the web for timely, relevant resources and give you all of that information. Both of these beliefs are incorrect because ChatGPT is limited to the information it’s been given by OpenAI . 

Remember how the ‘PT’ in ChatGPT stands for “Pre-trained”? That means that every time OpenAI gives ChatGPT an update, it’s given more information to work with (and so it has more information to share with you). In other words, it’s “trained” on information so it can give you the most accurate and relevant responses possible—but that information can be limited and biased . Ultimately, humans at OpenAI decide what pieces of information to share with ChatGPT, so it’s only as accurate and reliable as the sources it has access to.

For example, if you were to ask ChatGPT-3.5 what notable headlines made the news last week, it would respond that it doesn’t have access to that information because its most recent update was in September 2021!

You’re probably already familiar with how easy it is to come across misinformation (misleading and untrue information) on the internet. Since ChatGPT can’t tell the difference between what is true and what isn’t, it’s up to the humans at OpenAI to make sure only accurate and true information is given to the chatbot. This leaves room for human error, and users of ChatGPT have to keep that in mind when using and learning from the chatbot.

These are just the most obvious problems with ChatGPT. Some other problems with the chatbot include:

  • A lack of common sense. ChatGPT can create seemingly sensical responses to many questions and topics, but it doesn’t have common sense or complete background knowledge.
  • A lack of empathy. ChatGPT doesn’t have emotions, so it can’t understand them, either. 
  • An inability to make decisions or problem solve . While the chatbot can complete basic tasks like answering questions or giving recommendations, it can’t solve complex tasks. 

While there are some great uses for ChatGPT, it’s certainly not without its flaws.


What Other Tools and Strategies Can Help Students Besides ChatGPT?

While it’s not a good idea to use ChatGPT for college admissions materials, it’s not the only tool available to help students with college essays and assignments.

One of the best strategies students can use to write good essays is to make sure they give themselves plenty of time for the assignment. The writing process includes much more than just drafting! Having time to brainstorm ideas, write out a draft, revise it for clarity and completeness, and polish it makes for a much stronger essay. 

Teachers are another great resource students can use, especially for college application essays. Asking a teacher (or two!) for feedback can really help students improve the focus, clarity, and correctness of an essay. It’s also a more interactive way to learn—being able to sit down with a teacher to talk about their feedback can be much more engaging than using other tools.

Using expert resources during the essay writing process can make a big difference, too. Our article outlines a complete list of strategies for students writing college admission essays. It breaks down what the Common Application essay is, gives tips for choosing the best essay topic, offers strategies for staying focused and being specific, and more.

You can also get help from people who know the college admissions process best, like former admissions counselors. PrepScholar’s Admissions Bootcamp guides you through the entire application process , and you’ll get insider tips and tricks from real-life admissions counselors that’ll make your applications stand out. Even better, our bootcamp includes step-by-step essay writing guidance, so you can get the help you need to make sure your essay is perfect.

If you’re hoping for more technological help, Grammarly is another AI tool that can check writing for correctness. It can correct things like misused and misspelled words and grammar mistakes, and it can improve your tone and style. 

It’s also widely available across multiple platforms through a Windows desktop app, an Android and iOS app, and a Google Chrome extension. And since Grammarly just checks your writing without doing any of the work for you, it’s totally safe to use on your college essays. 

The Bottom Line: ChatGPT College Admissions and Essays

ChatGPT will continue to be a popular discussion topic as it continues evolving. You can expect your chosen schools to address ChatGPT and other AI tools in their academic honesty and plagiarism policies in the near future—and maybe even to restrict or ban the use of the chatbot for school admissions and assignments.

As AI continues transforming, so will AI-detection. The goal is to make sure that AI is used responsibly by students so that they’re avoiding plagiarism and building their research, writing, and critical thinking skills. There are some great uses for ChatGPT when used responsibly, but you should always check with your teachers and schools beforehand.

ChatGPT’s “bad” aspects still need improving, and that’s going to take some time. Be aware that the chatbot isn’t even close to perfect, and it needs to be fact-checked just like other sources of information.

As with other school assignments, don’t submit a ChatGPT college essay with your college applications. College entrance essays should outline unique and interesting personal experiences and ideas, and those can only come from you.

Just because ChatGPT isn’t a good idea doesn’t mean there aren’t resources to help you put together a great college essay. There are many other tools and strategies you can use instead of ChatGPT , many of which have been around for longer and offer better feedback. 


What’s Next?

Ready to write your college essays the old-fashioned way? Start here with our comprehensive guide to the admissions essays.  

Most students have to submit essays as part of their Common Application . Here's a complete breakdown of the Common App prompts —and how to answer them.

The most common type of essay answers the "why this college?" prompt. We've got an expert breakdown that shows you how to write a killer response , step by step. 



Ashley Sufflé Robinson has a Ph.D. in 19th Century English Literature. As a content writer for PrepScholar, Ashley is passionate about giving college-bound students the in-depth information they need to get into the school of their dreams.



Using ChatGPT for Assignments | Tips & Examples

Published on February 13, 2023 by Jack Caulfield and Tobias Solis. Revised on November 16, 2023.

People are still figuring out the best use cases for ChatGPT, the popular chatbot based on a powerful AI language model. This article provides some ideas for how to use ChatGPT and other AI tools to assist with your academic writing.

These tools can assist you with the following steps when writing a paper, thesis, or dissertation:

  • Developing a research question
  • Creating an outline
  • Generating literature ideas
  • Paraphrasing and summarizing text
  • Getting feedback


Table of contents

  • Developing a research question with ChatGPT
  • Creating an outline of your paper with ChatGPT
  • Generating literature ideas
  • Getting feedback from ChatGPT
  • Limitations of ChatGPT
  • Other interesting articles
  • Frequently asked questions

You can use ChatGPT to help you formulate a clear and concise research question by generating and improving potential questions.

With the help of ChatGPT, you can develop a large number of possible research questions within a short time. This is a great way to generate ideas and different approaches to a topic.

However, make sure that the generated questions make sense and meet your needs. As you can see from our example, you can keep working with ChatGPT to adapt its suggestions to your needs (or just edit the text directly if you already have a clear idea of what you want).

First input

Develop three possible research questions on the following topic: “The influence of social media on political opinion formation.”

ChatGPT output

  • To what extent does exposure to diverse political perspectives on social media affect individuals’ political opinions and behaviors?
  • How does the personalized algorithm of social media platforms shape the formation of political opinions among its users?
  • What are the underlying factors that determine the effect of social media on political opinion formation, and how can they be measured and analyzed?

Second input

Based on the second research question, develop three further research questions on the topic of polarization in US politics.

  • Build my scientific method, qualitative discourse analysis, into each question
  • Include the term “filter bubble” in each question

ChatGPT output

  • How does the filter bubble effect on social media platforms contribute to the polarization of political attitudes in the US? A qualitative discourse analysis.
  • What is the role of filter bubbles in perpetuating and reinforcing political echo chambers in the US? A qualitative discourse analysis.
  • How does the influence of filter bubbles on social media affect the ability of individuals to engage in productive political discourse across ideological divides in the US? A qualitative discourse analysis.

Third input

Try to incorporate “qualitative discourse analysis” into the questions themselves, instead of using it as a separate sentence.

ChatGPT output

  • How does a qualitative discourse analysis of the filter bubble effect on social media platforms contribute to understanding the polarization of political attitudes in the US?
  • What is the contribution of a qualitative discourse analysis to understanding the role of filter bubbles in perpetuating and reinforcing political echo chambers in the US?
  • How does a qualitative discourse analysis of the influence of filter bubbles on social media inform our understanding of the ability of individuals to engage in productive political discourse across ideological divides in the US?


You can also use ChatGPT to help you draft a research paper outline or thesis outline. To do this, try generating possible headings and subheadings and then improving them. ChatGPT can help generate a clear and well-structured outline, especially if you keep adjusting the structure with its help.

However, it doesn’t have a deep understanding of your topic. As a result, inappropriate headings may sometimes be generated. It’s important to check that the outline makes sense and covers all the points needed to answer your research question.

You should also ensure that the headings and subheadings follow a logical structure. Scribbr’s Proofreading & Editing service can help you with this.

Create an outline for an academic research paper with the following research question:

“What is the contribution of a qualitative discourse analysis to understanding the role of filter bubbles in perpetuating and reinforcing political echo chambers in the US?”

The paper will be about 8 pages long.

Introduction

  • Background information on filter bubbles and political echo chambers
  • Purpose of the study
  • Research question
  • Thesis statement

Literature review

  • Definition and concept of filter bubbles
  • Definition and concept of political echo chambers
  • Previous studies on the role of filter bubbles in perpetuating and reinforcing political echo chambers
  • Critiques and limitations of previous studies

Methodology

  • Research design
  • Participants
  • Data collection and analysis
  • Validity and reliability

Results

  • Findings from qualitative discourse analysis
  • Results related to the research question
  • Implications of the findings

Discussion

  • Interpretation of the results
  • Comparison with previous studies
  • Limitations of the study
  • Implications for future research

Conclusion

  • Summary of the results
  • Implications for understanding the role of filter bubbles in perpetuating and reinforcing political echo chambers
  • Conclusion and recommendations for future research

References

  • List of references used in the paper

Appendices

  • Data collection materials
  • Examples of qualitative data analysis

This seems like too much information to fit into only 8 pages. Can you condense the outline, for example removing the appendices and simplifying the structure?

  • Background on filter bubbles and political echo chambers
  • Definition of filter bubbles and political echo chambers
  • Previous studies on the role of filter bubbles
  • Limitations of previous studies
  • Implications for understanding the role of filter bubbles
  • Final thoughts

ChatGPT can be a useful way to generate ideas regarding what sources you might use in your work. Try asking it for suggestions about where to look and what to consider.

However, we advise against using it to find sources directly. It’s not designed for this and, if asked to do it, often just creates fake sources and presents them as real. Search a database like Google Scholar instead.

What types of sources could I use to write a paper on the following research question?

“What is the contribution of a qualitative discourse analysis to understanding the role of filter bubbles in perpetuating and reinforcing political echo chambers in the US?”

When writing a paper on the contribution of qualitative discourse analysis to understanding the role of filter bubbles in perpetuating and reinforcing political echo chambers in the US, you could use the following types of sources:

  • Academic journals and articles: These are a good place to start when you’re looking for information on specific topics and they can provide you with in-depth analysis and perspectives on the topic at hand.
  • Books and book chapters: These sources can provide comprehensive overviews of a particular topic and can also offer theoretical frameworks and historical perspectives.
  • Government reports and studies: Government agencies, such as the Federal Communications Commission (FCC), may have conducted studies on the role of filter bubbles in perpetuating political echo chambers.
  • News articles and media reports: These sources can provide insight into the current state of the issue, as well as opinions from various stakeholders.
  • Online sources: Websites, blogs, and online forums can provide a range of perspectives on the topic and can also be useful in exploring the experiences of individuals who are affected by filter bubbles and political echo chambers.
  • Data and statistics: Relevant data and statistics on the prevalence and impact of filter bubbles and political echo chambers in the US can be useful in providing evidence to support your arguments.
  • Expert interviews and focus groups: Conducting expert interviews or focus groups with individuals who have experience with or knowledge of the topic can provide valuable insights and perspectives that can be used to support your arguments.

You can also try using ChatGPT to paraphrase or summarize text. ChatGPT can quickly condense the text or generate a number of different paraphrases of the same idea. You can keep typing “again” in the chat until you are satisfied with the result.

This kind of paraphrasing/summarizing can be helpful, for example, to non-native speakers who want to improve the fluency of their text. However, be careful. The outputs often:

  • Contain grammatical errors
  • Include false information

So you should always check the output to make sure it’s correct. If you’re unsure, you can run the text through the Scribbr Grammar Checker and cross-reference the information against other, more credible sources.

An alternative is to use the Scribbr Paraphrasing Tool or Scribbr’s free text summarizer, which were designed specifically for these purposes.

Paraphrase the following text:

“A filter bubble results from a social media algorithm’s attempts to cater to an individual user based on data such as their search history and previous behavior on the site. It results in the user only being presented with information that confirms their biases.”

Restructure the sentences and use other phrasings for the following parts:

  • “results from”
  • “only being presented with”

ChatGPT output

“A filter bubble is created when a social media algorithm customizes its content for a particular user based on their data, such as their search history and past actions on the platform. This leads to the user being confined to information that reinforces their pre-existing beliefs.”

Note: We strongly advise against having ChatGPT write text for you and trying to pass it off as your own work. This is academically dishonest and is widely considered plagiarism. It will usually result in an automatic fail, as well as other potential consequences.
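If you paraphrase text frequently, the same kind of request can be scripted rather than typed into the chat interface. The sketch below is an illustrative assumption, not part of Scribbr's workflow: the prompt wording mirrors the example above, while the model name and the `paraphrase` helper are hypothetical choices. Only `build_paraphrase_prompt` is a pure function; the API call requires the `openai` package and an `OPENAI_API_KEY` environment variable.

```python
def build_paraphrase_prompt(text: str, avoid_phrases: list[str]) -> str:
    """Assemble a paraphrase request mirroring the chat prompt shown above."""
    lines = ["Paraphrase the following text:", "", f'"{text}"', ""]
    if avoid_phrases:
        lines.append(
            "Restructure the sentences and use other phrasings "
            "for the following parts:"
        )
        lines.extend(f'  - "{p}"' for p in avoid_phrases)
    return "\n".join(lines)


def paraphrase(text: str, avoid_phrases: list[str]) -> str:
    """Hypothetical helper: send the prompt through the OpenAI Python SDK."""
    from openai import OpenAI  # requires the `openai` package and an API key

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "user", "content": build_paraphrase_prompt(text, avoid_phrases)}
        ],
    )
    return response.choices[0].message.content
```

As with the chat interface, the scripted output still needs the same fact-checking and grammar review described above.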


You can also use ChatGPT to get feedback on the argumentation and structure of your paper and suggestions for improvements.

An advantage of this is that you can get feedback from ChatGPT on different text excerpts at any time. However, one downside is that ChatGPT does not have a deep understanding of your research question, so its feedback and suggestions may be shallow or inappropriate.

This kind of feedback cannot replace input from an experienced academic supervisor who’s familiar with your topic. Use ChatGPT feedback as a supplement, not a substitute for human feedback.

If you need in-depth feedback on your writing from someone with academic expertise, try Scribbr’s Proofreading & Editing service.

[Text you want to get feedback on.]

Thank you for submitting your research paper for review. Let me provide you with my constructive criticism:

Clarity of Explanations: Overall, the explanations are clear and concise, but there are a few areas that could benefit from additional clarification. For example, in the first paragraph, you mention “the mental lexicon,” but do not provide a definition or further explanation for what this is. It would also be helpful to provide a definition of “lemma.”

Smoothness of Language: The language in the paper is generally smooth, but there are a few instances where it could be refined. For example, the sentence “The symptoms vary from adding superfluous words and taking pauses to hoarseness of the voice” could be rephrased to “The symptoms of these disorders can range from excessive use of words and pauses to hoarseness.”

Structure of Text: The structure of the text is clear, with each paragraph building on the previous one. However, the transition between the first two paragraphs could be smoother. Perhaps you could add a sentence or two to bridge the gap between the two topics.

Even though ChatGPT is a powerful writing tool, you should keep the following limitations in mind:

  • No original research
  • Risk of plagiarism
  • Fake sources
  • Fake “facts”

ChatGPT was trained using existing texts, and the texts created by the AI are based on this data. It’s not capable of doing its own research, only of synthesizing sources that it was trained on.

Additionally, because the texts that it was trained on only go up to 2021, it’s unfamiliar with more recent world events, cultural trends, and developments in different fields of research. This means it may provide outdated information.

Because ChatGPT was trained using existing text, the AI tends to reuse that content in a way that may be considered plagiarism. The AI may copy text from an online publication in its responses.

Additionally, using ChatGPT to write your assignment for you is considered plagiarism regardless of where it got the content from, because you’re passing off something you didn’t write as your own work.

ChatGPT was trained on a wide variety of sources, but it’s not able to accurately tell you what sources it’s using for any given output. Worse, it has a tendency to make up sources that don’t actually exist when you ask.

In addition, ChatGPT is not designed as a citation generator and cannot add correctly formatted citations to your work. Instead, try a tool designed for this purpose, like the Scribbr Citation Generator .

ChatGPT is not always trustworthy when it comes to giving you factually correct information. While it doesn’t deliberately lie, it also doesn’t have “knowledge” in a traditional sense, and it lacks the ability to verify its claims.

Instead, its responses are based on recognizing and reproducing patterns that it has seen in other texts. This means that if it doesn’t “know” something, it tends to guess. And its guesses are presented with full confidence, just like established facts.

So it’s very important that you carefully check any factual claims made by ChatGPT against more credible sources, and don’t cite ChatGPT as a source of factual information for your research.

If you want more tips on using AI tools, understanding plagiarism, and citing sources, make sure to check out some of our other articles with explanations, examples, and formats.

  • Citing ChatGPT
  • Best grammar checker
  • Best paraphrasing tool
  • ChatGPT in your studies
  • Deep learning
  • Types of plagiarism
  • Self-plagiarism
  • Avoiding plagiarism
  • Academic integrity
  • Best plagiarism checker

Citing sources

  • Citation styles
  • In-text citation
  • Citation examples
  • Annotated bibliography

You can use ChatGPT to assist in the writing process for your research paper , thesis , or dissertation in the following ways:

  • Developing a research question
  • Creating an outline
  • Generating literature ideas
  • Paraphrasing and summarizing text
  • Getting feedback

No, it’s not a good idea to do so in general. First, it’s normally considered plagiarism or academic dishonesty to represent someone else’s work as your own (even if that “someone” is an AI language model). Even if you cite ChatGPT, you’ll still be penalized unless this is specifically allowed by your university. Institutions may use AI detectors to enforce these rules.

Second, ChatGPT can recombine existing texts, but it cannot generate genuinely new knowledge, and it lacks specialist knowledge of academic topics. As a result, it cannot produce original research results, and the text it generates may contain factual errors.

However, you can usually still use ChatGPT for assignments in other ways, as a source of inspiration and feedback.

No, ChatGPT cannot reliably cite your sources. You can ask it to create citations, but it isn’t designed for this task and tends to make up sources that don’t exist or present information in the wrong format. ChatGPT also cannot add citations to direct quotes in your text.

Instead, use a tool designed for this purpose, like the Scribbr Citation Generator .

But you can use ChatGPT for assignments in other ways, to provide inspiration, feedback, and general writing advice.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

Caulfield, J. & Solis, T. (2023, November 16). Using ChatGPT for Assignments | Tips & Examples. Scribbr. Retrieved September 23, 2024, from https://www.scribbr.com/ai-tools/chatgpt-assignments/



Student Opinion

Should Students Let ChatGPT Help Them Write Their College Essays?

If so, how? Tell us what you are thinking, and what practical and ethical questions these new A.I. tools raise for you.

Natasha Singer

Hey, ChatGPT, can you help me write my college admissions essays?

Absolutely! Please provide me with the essay prompts and any relevant information about yourself, your experiences, and your goals.


By Katherine Schulten

Teachers: We also have a lesson plan that accompanies this Student Opinion forum.

Are you working on a college application essay? Have you sought help from an adult? How about from an A.I. chatbot like ChatGPT or Bard? Were either useful? If so, how?

The New York Times recently published two articles about the questions these new tools are raising for the college process. One explores how A.I. chatbots are upending essay-writing. The other details what happened when a reporter fed application questions from Harvard, Yale, Princeton and Dartmouth to different bots.

Here’s how the first article, “Ban or Embrace? Colleges Wrestle With A.I.-Generated Admissions Essays,” explains what’s going on:

The personal essay has long been a staple of the application process at elite colleges, not to mention a bane for generations of high school students. Admissions officers have often employed applicants’ essays as a lens into their unique character, pluck, potential and ability to handle adversity. As a result, some former students say they felt tremendous pressure to develop, or at least concoct, a singular personal writing voice. But new A.I. tools threaten to recast the college application essay as a kind of generic cake mix, which high school students may simply lard or spice up to reflect their own tastes, interests and experiences — casting doubt on the legitimacy of applicants’ writing samples as authentic, individualized admissions yardsticks.

The piece continues:

Some teachers said they were troubled by the idea of students using A.I. tools to produce college essay themes and texts for deeper reasons: Outsourcing writing to bots could hinder students from developing important critical thinking and storytelling skills.

“Part of the process of the college essay is finding your writing voice through all of that drafting and revising,” said Susan Barber, an Advanced Placement English literature teacher at Midtown High School, a public school in Atlanta. “And I think that’s something that ChatGPT would be robbing them of.”

In August, Ms. Barber assigned her 12th-grade students to write college essays. This week, she held class discussions about ChatGPT, cautioning students that using A.I. chatbots to generate ideas or writing could make their college essays sound too generic. She advised them to focus more on their personal views and voices.

Other educators said they hoped the A.I. tools might have a democratizing effect. Wealthier high school students, these experts noted, often have access to resources — alumni parents, family friends, paid writing coaches — to help them brainstorm, draft and edit their college admissions essays. ChatGPT could play a similar role for students who lack such resources, they said, especially those at large high schools where overworked college counselors have little time for individualized essay coaching.



5 ways ChatGPT can help you write an essay


ChatGPT is capable of doing many different things very well. One of its biggest standout features is its ability to compose all sorts of text within seconds, including songs, poems, bedtime stories, and essays.

The chatbot's writing abilities are not only fun to experiment with but can also help with everyday tasks. Whether you are a student or a working professional, you constantly spend time composing emails, texts, reports, and more. ChatGPT can help you claim some of that time back by helping you brainstorm and even compose any text you need.

How to use ChatGPT to write: Code | Excel formulas | Resumes  | Cover letters  

Contrary to popular belief, ChatGPT can do much more than write an essay for you from scratch (which would be considered plagiarism). A more useful way to use the chatbot is to have it guide your writing process. 

Below, we show you how to use ChatGPT for both writing and assisting and include some other helpful writing tips. 

How ChatGPT can help you write an essay

If you are looking to use ChatGPT to support or replace your writing, here are five different techniques to explore. 

It is also worth noting before you get started that other AI chatbots can produce the same results as ChatGPT, or even better ones, depending on your needs. For example, Copilot, Perplexity, and Gemini also have access to the internet and include footnotes linking back to the original sources in their responses, making these chatbots solid alternatives if you'd rather skip ChatGPT.

Also:  The best AI chatbots of 2024: ChatGPT and alternatives

Regardless of which AI chatbot you pick, you can use the tips below to get the most out of your prompts and AI assistance.

1. Use ChatGPT to generate essay ideas

Before you start writing an essay, you need to flesh out the idea. When professors assign essays, they generally give students a prompt that gives them leeway for their own self-expression and analysis. 

As a result, students have the task of finding the angle to approach the essay on their own. If you have written an essay recently, you know that finding the angle is often the trickiest part -- and this is where ChatGPT can help. 

Also: How do AI checkers actually work?

All you need to do is input the assignment topic, include as much detail as you'd like -- such as what you're thinking about covering -- and let ChatGPT do the rest. For example, based on a paper prompt I had in college, I asked:

Can you help me come up with a topic idea for this assignment, "You will write a research paper or case study on a leadership topic of your choice." I would like it to include Blake and Mouton's Managerial Leadership Grid and possibly a historical figure. 

Within seconds, the chatbot produced a response that provided me with the title of the essay, options of historical figures on which to focus my article, insight into what information I could include in my paper, and specific examples of a case study I could use. 

2. Use the chatbot to create an outline

Once you have a solid topic, it's time to start brainstorming what you actually want to include in the essay. To facilitate the writing process, I always create an outline, including all the different points I want to touch upon in my essay. However, the outline-writing process is usually tedious. 

With ChatGPT, all you have to do is ask it to write the outline for you. 

Using the topic that ChatGPT helped me generate in step one, I asked the chatbot to write me an outline by saying: 

Can you create an outline for a paper, "Examining the Leadership Style of Abraham Lincoln through Blake and Mouton's Managerial Leadership Grid."

Also: How my 4 favorite AI tools help me get more done at work

After a few seconds, the chatbot produced a holistic outline divided into seven sections, with three points under each section. 

This outline is thorough and can be condensed for a shorter essay or elaborated on for a longer paper. If you don't like something or want to tweak the outline further, you can do so either manually or with more instructions to ChatGPT. 

If you want ChatGPT to include links and sources throughout, you can ask it to, and it will regenerate the answer using its web-browsing feature, further expediting your essay-writing process.

3. Use ChatGPT to find sources

Now that you know exactly what you want to write, it's time to find reputable sources for your information. If you don't know where to start, you can ask ChatGPT.

All you need to do is ask the AI to find sources for your essay topic. The biggest thing to remember is to specify the type of source you want, whether web pages, books, PDFs, or research papers.

Also:   How to make ChatGPT provide sources and citations

The specifics are necessary because when you specify web pages, ChatGPT will activate its web-browsing feature and include links in its answer. If you use a very general prompt, however, it will likely default to generating its answer from its training data, which isn't up to date.

For example, I asked the following: 

"Examining the Leadership Style of Abraham Lincoln through Blake and Mouton's Managerial Leadership Grid."

The chatbot will output links, accessible right at the top of the answer under the drop-down that says "Searched # of sites." Then, within the response, it answers your prompt conversationally, including sources with site names and links in parentheses.

4. Use ChatGPT to write an essay

It is worth noting that if you take the text directly from the chatbot and submit it, your work could be considered plagiarism since it is not your original work. As with any information taken from another source, text generated by an AI should be identified and credited to the chatbot in your work.

In most educational institutions, the penalties for plagiarism are severe, ranging from a failing grade to expulsion from the school. A better use of ChatGPT's writing features would be to create a sample essay to guide your writing. 

Also:  ChatGPT vs. Microsoft Copilot vs. Gemini: Which is the best AI chatbot?

If you still want ChatGPT to create an essay from scratch, enter the topic and the desired length. For example, I input the following text: 

Can you write a five-paragraph essay on the topic, "Examining the Leadership Style of Abraham Lincoln through Blake and Mouton's Managerial Leadership Grid."

Within seconds, the chatbot gave the exact output I required: a coherent, five-paragraph essay on the topic. You could then use that text to guide your own writing. 

At this point, it's worth remembering how tools like ChatGPT work: they put words together in a form that they think is statistically valid, but they don't know if what they are saying is true or accurate.

As a result, the output you receive might include invented facts, details, or other oddities. The output might be a useful starting point for your own work but don't expect it to be entirely accurate, and always double-check the content. 

5. Use ChatGPT to co-edit your essay

Once you've written your own essay, you can use ChatGPT's advanced writing capabilities to edit the piece for you. 

You can simply tell the chatbot what you want it to edit. For example, I asked ChatGPT to edit our five-paragraph essay for structure and grammar, but other options could have included flow, tone, and more. 

Also:  How to use ChatGPT to make charts and tables

Once you ask the tool to edit your essay, it will prompt you to paste your text into the chatbot. ChatGPT will then output your essay with corrections made. This feature is particularly useful because ChatGPT edits your essay more thoroughly than a basic proofreading tool, as it goes beyond simply checking spelling. 

You can also co-edit with the chatbot, asking it to review a specific paragraph or sentence and rewrite or fix the text for clarity. Personally, I find this feature very helpful. 



How to Grade Essays with ChatGPT

Introduction

The rise of large language models (LLMs) like OpenAI’s ChatGPT has opened exciting possibilities in essay grading. With its advanced natural language processing capabilities, ChatGPT offers a new dimension in assessing written work, potentially revolutionizing the grading process for educators and researchers. Let’s delve into how ChatGPT could potentially make essay grading easier, more efficient, and more accurate.

ChatGPT can analyze written content for various parameters, including content quality, argument structure, coherence, and adherence to guidelines. Whether you use a continuous scoring system (e.g., quality of writing) or a discrete one (e.g., essay positions), ChatGPT can be tailored to your specific needs, offering customized feedback for different writing styles and assignments. The literature also suggests that LLMs can significantly increase grading efficiency, alleviating some of the burden on educators (Abedi et al., 2023; Okonkwo & Ade-Ibijola, 2021; Zawacki-Richter et al., 2019). Imagine grading hundreds of essays and providing feedback on them: a time-consuming and tiring task. ChatGPT can automate the initial assessment, flagging essays that require further attention based on specific criteria. Additionally, ChatGPT can identify stylistic strengths and weaknesses, analyze the use of literary devices, and even point out potential inconsistencies in an argument's logic. This could free up valuable educator time for student interaction and curriculum development.

However, caution against over-reliance on this new technology is advised, particularly in scenarios where biased or inaccurate models could unfairly impact individual students. It is essential to recognize both the potential advantages and limitations of LLMs. This blog post aims to reflect on ChatGPT's capabilities for grading and classifying essays and to provide insights into the practical application of ChatGPT in educational settings.

In this blog, we will explore:

  • Essay grading with ChatGPT and ChatGPT API
  • Steps for essay grading with ChatGPT API
  • Steps for essay classification with ChatGPT API
  • Cost & computation times

For steps 2 and 3, we will provide detailed instructions on how to access and set up the ChatGPT API, prepare and upload your text dataset, and efficiently grade or classify numerous essays. Additionally, we will compare the outcomes of human grading to those obtained through GPT grading.

Essay Grading with ChatGPT and ChatGPT API

For a single essay, we can simply ask ChatGPT to grade as follows:


For multiple essays, we could request ChatGPT to grade each one individually. However, when dealing with a large number of essays (e.g., 50, 100, 1000, etc.), manually grading them in this way becomes a laborious and time-consuming task. In such cases, we can leverage the ChatGPT API service to evaluate numerous essays at once, providing greater flexibility and efficiency. The ChatGPT API is a versatile tool that enables developers to integrate ChatGPT into their own applications, services, or websites. When you use the API, you also gain more control over the interaction, such as the ability to adjust temperature, maximum tokens, and the presence of system messages.

It is important to understand the distinctions between ChatGPT's web interface and the pretrained models accessible through the OpenAI API.

ChatGPT’s web version provides a user-friendly chat interface, requiring no coding knowledge and offering features like integrated system tools. However, it is less customizable and is not designed for managing high volumes of requests. Additionally, due to its internal short-term memory span, previous conversations can influence later responses. In contrast, the OpenAI API offers pretrained models without a built-in interface, necessitating coding experience for integration. These models excel at managing large request volumes, but lack ChatGPT’s conversational memory; they process each input independently. This fundamental difference can lead to variations in the outputs generated by ChatGPT’s web interface and the OpenAI API.

Here’s an example of grading a single essay using the ChatGPT API with Python:
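A minimal sketch of such a call, assuming the openai v1 Python package; the rubric wording, the 1-6 scale, and the temperature setting are illustrative assumptions rather than the blog's exact prompt:

```python
def build_messages(essay):
    """Build a chat prompt asking for a bare numeric score."""
    return [
        {"role": "system", "content": "You grade student essays."},
        {"role": "user",
         "content": "Grade the following essay on a scale of 1 to 6. "
                    "Reply with only the numeric score.\n\n" + essay},
    ]

def grade_essay(essay, api_key="YOUR-API-KEY"):
    """Send one essay to the API and parse the reply as a float score."""
    from openai import OpenAI  # requires: pip install openai
    client = OpenAI(api_key=api_key)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-0125",
        temperature=0,  # a low temperature reduces run-to-run variability
        messages=build_messages(essay),
    )
    return float(response.choices[0].message.content.strip())

# grade_essay("I have a tablet. But it is a lot of money. ...")
```

Asking for "only the numeric score" matters: a conversational reply such as "I would give this essay a 2" cannot be parsed directly as a number.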

Interestingly, this example produces a single score rather than the sentence generated above via the ChatGPT web interface. This difference could be attributed to the ChatGPT API interpreting the prompt more directly than the ChatGPT online service, even though they both use the same pretrained model. Alternatively, the variability in ChatGPT’s results might be due to inherent randomness in its responses.

By implementing a loop with multiple texts, we can acquire scores for an entire set of essays. Let’s see how to do that.

Steps for Essay Grading with ChatGPT API

Get and set up a ChatGPT API key

We assume that you have already installed the Python OpenAI library on your system and have an active OpenAI account. Setting up and obtaining access to the ChatGPT API involves the following steps:

Obtain an OpenAI key: Visit the OpenAI API keys page at https://platform.openai.com/api-keys and click the +Create new secret key button. Save your key securely, as you cannot view the same key again later due to OpenAI’s security policies.

Set up API key: In your Python script or notebook, set up the API key using the following code, replacing “YOUR-API-KEY” with your actual API key:
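One way to do this, shown as a sketch: store the key in an environment variable rather than hard-coding it, which the openai v1 client reads automatically.

```python
import os

# Replace with your actual key; keeping it in an environment variable
# avoids committing secrets to source files.
os.environ["OPENAI_API_KEY"] = "YOUR-API-KEY"

# With the openai v1 package, the client picks the key up automatically:
# from openai import OpenAI
# client = OpenAI()
```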

Load the text dataset

In this post, we will grade a series of essays about iPad usage in schools:
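A loading sketch with pandas; the file name ipad_essays.csv is a hypothetical stand-in (the original dataset is not linked in the post), while the column names match the DataFrame preview below.

```python
import pandas as pd

def load_essays(path="ipad_essays.csv"):
    """Load the essay dataset; expected columns: Text, Stance_iPad, Scores."""
    return pd.read_csv(path)

# df = load_essays()
# print(df.head())
```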

Text Stance_iPad Scores
0 Some people allow Ipads because some people ne… AMB 1
1 I have a tablet. But it is a lot of money. But… AMB 1
2 Do you think we should get rid of the Ipad wh… AMB 1
3 I said yes because the teacher will not be tal… AMB 2
4 Well I would like the idea . But then for it … AMB 4

Score the multiple essays

Grading 50 essays takes only 25 seconds.
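The batch loop might look like the sketch below; grade_fn is a placeholder for whatever function sends one essay to the API and returns the model's reply, and the None fallback for non-numeric replies is an assumption about sensible error handling, not the blog's exact code.

```python
def score_all(texts, grade_fn):
    """Score each essay in turn, recording None when the reply isn't numeric."""
    scores = []
    for text in texts:
        try:
            scores.append(float(grade_fn(text)))
        except ValueError:  # model replied with something non-numeric
            scores.append(None)
    return scores

# df["Scores_GPT"] = score_all(df["Text"], grade_fn)
```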

Text Stance_iPad Scores Scores_GPT
0 Some people allow Ipads because some people ne… AMB 1 2.0
1 I have a tablet. But it is a lot of money. But… AMB 1 2.0
2 Do you think we should get rid of the Ipad wh… AMB 1 2.0
3 I said yes because the teacher will not be tal… AMB 2 2.0
4 Well I would like the idea . But then for it … AMB 4 4.0

Compare human grading scores with GPT grading scores

For these data, we happen to have scores given by human raters as well, allowing us to assess how similar the human scores are to the scores generated by ChatGPT.

Using the code provided in the accompanying script, we get the following.


A contingency table (confusion matrix) of the scores is:

Scores_GPT 1.0 2.0 3.0 4.0 5.0
Scores
0 1 7 0 0 0
1 0 9 0 0 0
2 0 4 1 0 0
3 0 8 2 0 0
4 0 8 3 2 0
5 0 0 2 2 0
6 0 0 0 0 1

The averages and standard deviations of human grading and GPT grading scores are 2.54 ( SD = 1.68) and 2.34 ( SD = 0.74), respectively. The correlation between them is 0.62, indicating a fairly strong positive linear relationship. Additionally, the Root Mean Squared Error (RMSE) is 1.36, providing a measure of the GPT’s prediction accuracy compared to the actual human grading scores.

Steps for Essay Classification with ChatGPT API

ChatGPT can be utilized not only for scoring essays but also for classifying essays based on some categorical variable such as writers’ opinions regarding iPad usage in schools. Here are the steps to guide you through the process, assuming you already have access to the ChatGPT API and have loaded your text dataset:

Classify multiple essays

Classifying 50 essays takes only 27 seconds.
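A classification prompt might be built along the lines below; the wording and the parenthetical label glosses are illustrative assumptions, though the AFF/NEG/OTHER labels follow the recoding described next.

```python
def build_stance_messages(essay):
    """Build a chat prompt asking for a bare stance label."""
    return [
        {"role": "system", "content": "You classify student essays."},
        {"role": "user",
         "content": "Classify the writer's stance on iPad use in schools as "
                    "AFF (in favor), NEG (against), or OTHER (unclear). "
                    "Reply with only the label.\n\n" + essay},
    ]
```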

We create a new column re_Stance_iPad based on a mapping of values from the existing Stance_iPad column. Unlike the clear AFF and NEG opinions, the AMB, BAL, and NAR stances are unclear, so they are combined as OTHER.

Text Stance_iPad Scores Scores_GPT re_Stance_iPad Stance_iPad_GPT
0 Some people allow Ipads because some people ne… AMB 1 2.0 OTHER OTHER
1 I have a tablet. But it is a lot of money. But… AMB 1 2.0 OTHER OTHER
2 Do you think we should get rid of the Ipad wh… AMB 1 2.0 OTHER OTHER
3 I said yes because the teacher will not be tal… AMB 2 2.0 OTHER OTHER
4 Well I would like the idea . But then for it … AMB 4 4.0 OTHER OTHER

Compare human classification with GPT classification

Stance_iPad_GPT AFF NEG OTHER
re_Stance_iPad
AFF 7 0 3
NEG 0 9 1
OTHER 3 1 26

ChatGPT achieves an accuracy of approximately 84%, demonstrating its correctness in classification. An F1 score of 0.84, reflecting the harmonic mean of precision and recall, signifies a well-balanced performance in terms of both precision and recall. Additionally, the Cohen’s Kappa value of 0.71, which measures the agreement between predicted and actual classifications while accounting for chance, indicates substantial agreement beyond what would be expected by chance alone.
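These figures can be verified directly from the confusion matrix above with a few lines of plain Python (rows are human labels, columns are GPT labels, in the order AFF, NEG, OTHER; the F1 shown is support-weighted):

```python
cm = [[7, 0, 3],
      [0, 9, 1],
      [3, 1, 26]]
n = sum(sum(row) for row in cm)

# Accuracy: fraction of essays on the diagonal.
accuracy = sum(cm[i][i] for i in range(3)) / n

# Weighted F1: per-class F1 scores weighted by class support.
f1 = 0.0
for i in range(3):
    support = sum(cm[i])
    predicted = sum(cm[r][i] for r in range(3))
    tp = cm[i][i]
    prec = tp / predicted if predicted else 0.0
    rec = tp / support if support else 0.0
    f1 += (support / n) * (2 * prec * rec / (prec + rec) if prec + rec else 0.0)

# Cohen's kappa: observed agreement corrected for chance agreement.
p_e = sum(sum(cm[i]) * sum(cm[r][i] for r in range(3)) for i in range(3)) / n ** 2
kappa = (accuracy - p_e) / (1 - p_e)

print(round(accuracy, 2), round(f1, 2), round(kappa, 2))  # 0.84 0.84 0.71
```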

Cost & Computation times

How long does it take to assess all essays?

Grading and classifying 50 essays each took 25 and 27 seconds, respectively, a rate of about 2 essays per second.

What is the cost of assessing all essays?

In this blog, we utilized GPT-3.5-turbo-0125. According to OpenAI’s pricing page, the cost for input processing is $0.0005 per 1,000 tokens, and for output, it is $0.0015 per 1,000 tokens; the ChatGPT API charges for both tokens sent and tokens received.

The total expenditure for grading all essays (50 assessing essay quality and 50 for essay classification) was approximately $0.01.

What are tokens and how to count them?

Tokens can be viewed as fragments of words. When the API receives prompts, it breaks down the input into tokens. These divisions do not always align with the beginning or end of words; tokens may include spaces and even parts of words. To grasp the concept of tokens and their length equivalencies better, here are some helpful rules of thumb:

  • 1 token ≈ 4 characters in English.
  • 1 token ≈ ¾ of a word.
  • 100 tokens ≈ 75 words.
  • 1 to 2 sentences ≈ 30 tokens.
  • 1 paragraph ≈ 100 tokens.
  • 1,500 words ≈ 2,048 tokens.
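For exact counts, OpenAI's tiktoken library can be used; as a quick sanity check, the four-characters-per-token rule above can also be coded directly (a rough heuristic, not an exact count):

```python
def estimate_tokens(text):
    """Rough token estimate using the 1 token ~ 4 characters rule of thumb."""
    return max(1, round(len(text) / 4))

# With tiktoken installed (pip install tiktoken), the exact count would be:
# import tiktoken
# enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
# n_tokens = len(enc.encode(text))

print(estimate_tokens("Grade the following essay on a scale of 1 to 6."))
```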

To get additional context on how tokens are counted, consider this:

The prompt at the beginning of this blog, requesting that OpenAI grade an essay, contains 129 tokens, and the output contains 12 tokens.

The input cost is $0.0000645, and the output cost is $0.000018.
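These figures follow directly from the per-token rates quoted above:

```python
# 129 prompt tokens at $0.0005 per 1,000 input tokens,
# 12 completion tokens at $0.0015 per 1,000 output tokens.
input_cost = 129 * 0.0005 / 1000   # ~ $0.0000645
output_cost = 12 * 0.0015 / 1000   # ~ $0.000018
print(input_cost + output_cost)    # total cost of one graded essay
```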

ChatGPT provides an alternative approach to essay grading. This post has delved into the practical application of ChatGPT’s natural language processing capabilities, demonstrating how it can be used for efficient and accurate essay grading, with a comparison to human grading. The flexibility of ChatGPT is particularly evident when handling large volumes of essays, making it a viable alternative tool for educators and researchers. By employing the ChatGPT API service, the grading process becomes not only streamlined but also adaptable to varying scales, from individual essays to hundreds or even thousands.

This technology has the potential to significantly enhance the efficiency of the grading process. By automating the assessment of written work, teachers and researchers can devote more time to other critical aspects of education. However, it’s important to acknowledge the limitations of current LLMs in this context. While they can assist in grading, relying solely on LLMs for final grades could be problematic, especially if LLMs are biased or inaccurate. Such scenarios could lead to unfair outcomes for individual students, highlighting the need for human oversight in the grading process. For large-scale research, where we look at averages across many essays, this is less of a concern (see, e.g., Mozer et al., 2023).

The guide in this blog has provided a step-by-step walkthrough of setting up and accessing the ChatGPT API for essay grading.

We also explored the reliability of ChatGPT’s grading, as compared to human grading. The moderate positive correlation of 0.62 attests to some consistency between human grading and ChatGPT’s evaluations. The classification results reveal that the model achieves an accuracy of approximately 84%, and the Cohen’s Kappa value of 0.71 indicates substantial agreement beyond what would be expected by chance alone. See the related study (Kim et al., 2024) for more on this.

In essence, this comprehensive guide underscores the transformative potential of ChatGPT in essay grading, presenting it as a valuable approach in the ever-evolving educational field. This post gives an overview; next, we dig in a bit more, thinking about prompt engineering and providing examples to improve accuracy.

Writer’s Comments

The API experience: a blend of ease and challenge

Starting your journey with the ChatGPT API will be surprisingly smooth, especially if you have some Python experience. Copying and pasting code from this blog, followed by acquiring your own ChatGPT API key and tweaking prompts and datasets, might seem like a breeze. However, this simplicity masks the underlying complexity. Bumps along the road are inevitable, reminding us that “mostly” easy does not mean entirely challenge-free.

The biggest hurdle you will likely face is mastering the art of crafting effective prompts. While ChatGPT’s responses are impressive, they can also be unpredictably variable. Conducting multiple pilot runs with 5-10 essays is crucial. Experimenting with diverse prompts on the same essays can act as a stepping stone, refining your approach and building confidence for wider application.

When things click, the benefits are undeniable. Automating the grading process with ChatGPT can save considerable time. Human graders, myself included, can struggle with maintaining consistent standards across a mountain of essays. ChatGPT, on the other hand, might be more stable when grading large batches in a row.

It is crucial to acknowledge that this method is not a magic bullet. Continuous scoring is not quite there yet, and limitations still exist. But the good news is that LLMs like ChatGPT are constantly improving, and new options are emerging.

Overall Reflections: A Journey of Discovery

The exploration of the ChatGPT API can be a blend of innovation, learning, and the occasional frustration. While AI grading systems like ChatGPT are not perfect, their ability to save time and provide a consistent grading scheme makes them an intriguing addition to the educational toolkit. As we explore and refine these tools, the horizon for their application in educational settings seems ever-expanding, offering a glimpse into a future where AI and human educators work together to enhance the learning experience. Who knows, maybe AI will become a valuable partner in the grading process in the future!

Call to Action

Have you experimented with using ChatGPT for grading? Share your experiences and questions in the comments below! We can all learn from each other as we explore the potential of AI in education.

  • Abedi, M., Alshybani, I., Shahadat, M. R. B., & Murillo, M. (2023). Beyond Traditional Teaching: The Potential of Large Language Models and Chatbots in Graduate Engineering Education. Qeios. https://doi.org/10.32388/MD04B0
  • Kim, Y., Mozer, R., Miratrix, L., & Al-Ademi, S. (2024). ChatGPT vs. Machine Learning: Assessing the Efficacy and Accuracy of Large Language Models for Automated Essay Scoring (in preparation).
  • Okonkwo, C. W., & Ade-Ibijola, A. (2021). Chatbots applications in education: A systematic review. Computers and Education: Artificial Intelligence, 2, 100033. https://doi.org/10.1016/j.caeai.2021.100033
  • Pricing. (n.d.). OpenAI. Retrieved March 2, 2024, from https://openai.com/pricing#language-models
  • Mozer, R., Miratrix, L., Relyea, J. E., & Kim, J. S. (2023). Combining Human and Automated Scoring Methods in Experimental Assessments of Writing: A Case Study Tutorial. Journal of Educational and Behavioral Statistics, 10769986231207886. https://doi.org/10.3102/10769986231207886
  • Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education–where are the educators?. International Journal of Educational Technology in Higher Education, 16(1), 1-27. https://doi.org/10.1186/s41239-019-0171-0


Here’s How Forbes Got the ChatGPT AI to Write 2 College Essays in 20 Minutes


Not only does ChatGPT write clear, compelling essays, but it can also conjure up its own personal details and embellishments that could boost a student’s chance of acceptance and would be difficult to verify.

Forbes’ full conversation with ChatGPT, OpenAI’s newest natural language model, is pasted below. Each of the college admissions essays took less than 10 minutes to complete. Read our story about ChatGPT’s capacity to write college applications here.

Prompt #1, The Common App:

Forbes: Hi GPT, I’d like you to write a college application essay as if you were an 18-year-old high school senior whose parents are from Bangalore, India but who now own a restaurant in Newton, Mass. He is a competitive swimmer, and in 10th grade he broke his shoulder. He is interested in majoring in business. Please make the essay a maximum of 650 words, and write it in response to this prompt: Some students have a background, identity, interest, or talent that is so meaningful they believe their application would be incomplete without it. If this sounds like you, then please share your story.

ChatGPT: As a competitive swimmer and the son of Indian immigrants who own a restaurant in Newton, Massachusetts, my background and interests make me a unique and valuable addition to any college campus. Growing up in a culturally diverse and economically vibrant community has given me a unique perspective on life and a desire to explore and understand different cultures and industries.

My love for swimming began at a young age, and it has always been a central part of my life. I began competing in local swim meets when I was just nine years old, and by the time I reached high school, I was a member of the varsity swim team. In my sophomore year, I suffered a major setback when I broke my shoulder in a swimming accident. Despite this injury, I was determined to return to the pool and continue competing.

After months of grueling physical therapy and hard work, I was finally able to return to the water. I was not as fast as I used to be, but I was still a valuable member of the team and I continued to improve throughout the season. By the end of my junior year, I was back to my top form and I was able to compete at the state championship meet. My hard work and determination paid off, and I helped lead my team to a second-place finish.


In addition to my passion for swimming, I have also been deeply influenced by my parents and their experiences as immigrants and small business owners. Growing up, I spent many hours in the restaurant, watching my parents work hard to build a successful business from the ground up. I have seen firsthand the challenges and rewards of entrepreneurship, and I have developed a strong interest in pursuing a career in business.

As a business major, I hope to gain the knowledge and skills necessary to start my own business one day, or to help other small businesses succeed. I am excited to explore the diverse opportunities and challenges that a business degree will present, and I am confident that I will be able to make a valuable contribution to the college community.

In conclusion, my background as a competitive swimmer, the son of Indian immigrants, and a member of a thriving business community has given me a unique perspective and a strong set of skills and interests. I believe that I would be a valuable addition to any college campus, and I am excited to explore the opportunities and challenges that a college education will provide.

Forbes: Thanks GPT, this is a great start. Please keep what you have, but add more information about our student's background and work in the restaurant.

ChatGPT: Sure, I can add more information about our student's background and work in the restaurant.

Growing up in a household where my parents owned and operated a successful restaurant gave me a unique insight into the world of business. I often spent my afternoons and weekends working in the restaurant, performing a variety of tasks and gaining valuable experience in the hospitality industry. From a young age, I learned the importance of hard work, dedication, and customer service, and I have come to appreciate the challenges and rewards of owning and operating a small business.

As I grew older, my responsibilities in the restaurant increased, and I took on more managerial roles. I helped my parents with inventory management, staff scheduling, and customer relations. I also assisted with marketing and advertising efforts, and I helped develop new menu items and promotions. This hands-on experience has given me a strong understanding of the ins and outs of running a business, and it has helped shape my interest in pursuing a career in business.

In addition to my work in the restaurant, my Indian heritage has also had a significant impact on my life. My parents instilled in me a deep appreciation for Indian culture, and I have always been proud of my heritage. I have also been exposed to the challenges and opportunities that come with being a first-generation immigrant in the United States. I have seen firsthand the difficulties that my parents faced as they worked to build a new life in a foreign country, and I have also seen the incredible rewards that come from hard work and determination.

My Indian heritage and my experience in the restaurant industry have given me a unique perspective on life and a strong set of skills and interests. I am excited to bring this perspective to a college campus, and I am confident that I will be able to make a valuable contribution to the college community. Thank you for considering me for admission.

Forbes: Thanks GPT. Please combine your first and second drafts.

ChatGPT: My love for swimming began at a young age, and it has always been a central part of my life. I began competing in local swim meets when I was just nine years old, and by the time I reached high school, I was a member of the varsity swim team. In my sophomore year, I suffered a major setback when I broke my shoulder in a swimming accident. Despite this injury, I was determined to return to the pool and continue competing. After months of grueling physical therapy and hard work, I was finally able to return to the water. I was not as fast as I used to be, but I was still a valuable member of the team and I continued to improve throughout the season. By the end of my junior year, I was back to my top form and I was able to compete at the state championship meet. My hard work and determination paid off, and I helped lead my team to a second-place finish.

Growing up in a household where my parents owned and operated a successful restaurant gave me a unique insight into the world of business. I often spent my afternoons and weekends working in the restaurant, performing a variety of tasks and gaining valuable experience in the hospitality industry. From a young age, I learned the importance of hard work, dedication, and customer service, and I have come to appreciate the challenges and rewards of owning and operating a small business. As I grew older, my responsibilities in the restaurant increased, and I took on more managerial roles. I helped my parents with inventory management, staff scheduling, and customer relations. I also assisted with marketing and advertising efforts, and I helped develop new menu items and promotions. This hands-on experience has given me a strong understanding of the ins and outs of running a business, and it has helped shape my interest in pursuing a career in business.

Prompt #2: University of Wisconsin-Madison supplemental essay

Forbes: Thanks GPT. I have another one for you. This time, please write a college application essay to the University of Wisconsin-Madison, in response to this prompt: "Tell us why you decided to apply to the University of Wisconsin-Madison. In addition, please include why you are interested in studying the major(s) you have selected. If you selected undecided please describe your areas of possible academic interest. (80-650 words)" Write as if your intended major is Business Administration and Marketing, you love Badgers football, and you work part-time at your family's restaurant.

ChatGPT: I decided to apply to the University of Wisconsin-Madison because it is a top-ranked university with a strong reputation for its business and marketing programs. As an aspiring business major, I am excited about the opportunity to study at such a prestigious institution and learn from some of the best minds in the field.

I am also excited about the potential to explore the many different facets of business and marketing through a variety of courses and extracurricular activities. I have always been interested in the world of business, and I am eager to gain a deeper understanding of the theories and practices that drive the field.

In addition to my interest in business, I am also drawn to Wisconsin-Madison because of its vibrant campus community. I am a big fan of Badgers football, and I can't wait to experience the energy and excitement of game days at Camp Randall Stadium. I am also looking forward to exploring the many other opportunities that Wisconsin-Madison has to offer, such as clubs and organizations, student events, and recreational activities.

I am also drawn to Wisconsin-Madison because of its strong sense of community. As a student who works part-time at my family's restaurant, I know the value of hard work and collaboration. I am excited to be part of a community of students and faculty who share my passion for learning and success.

Overall, I am confident that the University of Wisconsin-Madison is the perfect fit for me. I am excited about the opportunity to study business and marketing at a world-class institution, explore the many extracurricular opportunities on campus, and be part of a vibrant and supportive community. I am eager to begin this new chapter of my academic journey and contribute to the success of Wisconsin-Madison.

Forbes: Great, thank you. Please include more detail about why our student wants to study business, perhaps by including a story about a challenge they faced or skill they learned at their restaurant job.

ChatGPT: My interest in business was sparked at a young age, when I began working at my family's restaurant. I started out as a dishwasher, but over time, I was given more responsibilities and opportunities to learn. I quickly became fascinated by the many different aspects of running a successful business, from managing finances and inventory to marketing and customer service.

One of the biggest challenges I faced at the restaurant was helping to increase sales during slow periods. I was given the task of coming up with a marketing strategy to attract more customers, and I was thrilled to have the opportunity to put my ideas into action. I researched the competition and identified our unique selling points, and I developed a marketing plan that included promotions, discounts, and special events.

The marketing plan was a success, and we saw a significant increase in sales during the slow periods. I was thrilled to have made a real contribution to the success of the business, and I was grateful for the opportunity to learn and grow.

I am excited to continue exploring the world of business and marketing at the University of Wisconsin-Madison. I am eager to learn from experienced professionals and gain a deeper understanding of the theories and practices that drive the field. I am confident that this knowledge and experience will prepare me for a successful career in business and help me to make a positive impact in the world.

Forbes: Great! But let’s add back in the parts about our student’s love for Badger football. Please also make the essay longer, about 500 words.

Emma Whitford



Should I Use ChatGPT for My Essays?


Artificial intelligence has finally come to the world of academia. Students have been tempted to use LLM-based tools such as ChatGPT for writing essays and other research papers by simply entering a prompt and clicking “Generate.”

The burning question, however, revolves around the quality of work that AI can produce. Is the technology advanced enough to deliver on its promises, or does it fall short of expectations? Students are grappling with the decision of whether to entrust AI with their essay writing and contemplating the potential trade-offs involved.

In this article, we explore whether ChatGPT is good at writing essays, along with the pros and cons of AI writing tools, to gain a deeper understanding of why it is often more of a hindrance than a salvation for students. For those who would like to jump straight to the conclusion: using a college essay writing service provided by human authors is still a better alternative than entrusting such a foundational component of any course as the essay to a machine.

What Is Generative AI and How Does It Work

Generative Artificial Intelligence (AI) is a subset of machine learning that focuses on creating systems capable of producing new and original content rather than simply analyzing and interpreting existing data. What is generative AI compared to other forms of artificial intelligence? Unlike traditional AI models that rely on predefined rules and patterns, generative AI can generate novel outputs by learning from vast amounts of diverse data. This form of AI is often employed in tasks such as text and image generation, where the goal is to create content indistinguishable from human-created content.

The functioning of generative AI is rooted in neural networks, which are designed to mimic the human brain's structure and function. These networks consist of interconnected nodes organized into layers, each processing and extracting features from the input data. During training, the model learns the underlying patterns and structures in the data and refines that understanding until it can generate new, coherent outputs by making predictions based on the learned patterns.

Seriously Considering ChatGPT to Write Your Essay?

When you are in a hurry, GenAI is not the best option; relying on expert writers is a far safer bet.

Generative AI Setbacks

Now, let’s move on to the pros and cons of ChatGPT, the most popular GenAI instrument. It produces text through sophisticated neural network architectures: earlier text generators relied on recurrent neural networks (RNNs), while modern models like ChatGPT are built on transformers. The primary objective of generative AI in this context is to autonomously generate coherent and contextually relevant text passages.

The model is trained on large datasets containing diverse examples of human-written text, learning the intricacies of language, grammar, and contextual relationships. While generative AI can produce impressive text content, it's important to note that it doesn't possess true comprehension or consciousness but rather mimics linguistic patterns learned from its training data. 
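Because the model only mimics learned linguistic patterns, each word it emits is a sample from a probability distribution over candidate next tokens. A minimal illustration of that step, using invented scores (the words and numbers below are hypothetical, not real model output), is the softmax-plus-temperature sampling commonly described for language models:

```python
import math
import random

def sample_next_token(scores, temperature=1.0, rng=random):
    """Convert raw scores into probabilities (softmax) and sample one token.

    Lower temperature sharpens the distribution (more predictable text);
    higher temperature flattens it (more varied, riskier text).
    """
    scaled = {tok: s / temperature for tok, s in scores.items()}
    max_s = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(exps.values())
    r = rng.random()
    cumulative = 0.0
    for tok, e in exps.items():
        cumulative += e / total
        if r <= cumulative:
            return tok
    return tok  # guard against floating-point rounding at the top end

# Hypothetical scores a model might assign to candidate words
# after the fragment "The essay is" (numbers invented for illustration).
scores = {"excellent": 2.0, "finished": 1.0, "purple": -3.0}
print(sample_next_token(scores, temperature=0.7))
```

This is also why two identical prompts can yield different essays: the output is sampled, not retrieved, and nothing in the sampling step checks whether the chosen word is factually right.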


Quality Concerns

One significant drawback of relying on ChatGPT for scholarly writing is its variable output quality. While it can generate coherent text, that text often lacks the depth, critical thinking, and nuanced understanding that human-generated content possesses. The AI may struggle to grasp complex topics or provide insightful analysis, leading to subpar essay quality. Even if you ask ChatGPT itself, 'Can I use ChatGPT for essays?', its answer boils down to 'yes, but with several major caveats.'

Lack of Originality

Another disadvantage is the risk of essays lacking originality. ChatGPT learns from existing datasets, and without proper guidance, it may reproduce common ideas and phrases. This lack of originality can affect the uniqueness and authenticity of the essay, potentially leading to issues related to plagiarism and academic integrity.

Limited Subject Expertise

ChatGPT's knowledge is broad but not necessarily deep or specialized. It might lack expertise in specific academic subjects, leading to inaccuracies or oversimplifications in content. Students relying solely on ChatGPT may find it challenging to produce essays demonstrating a profound understanding of specialized topics.

Contextual Misinterpretation

ChatGPT may struggle with nuanced contextual understanding, leading to misinterpretations of prompts or generating contextually inappropriate content. This lack of contextual finesse can result in essays that deviate from the intended meaning or fail to address the nuances specific to a given topic. Students may spend additional time correcting or clarifying the output to align it with the intended context.

Dependency and Skill Erosion

Over-reliance on ChatGPT might hinder the development of essential writing and critical-thinking skills. Suppose students consistently use AI to compose essays. In that case, they may miss out on the learning opportunities that come from grappling with the complexities of research, analysis, and creative expression, ultimately impacting their academic growth.

AI Hallucination

While using ChatGPT for college essays, an unwary student might face the phenomenon known as "AI hallucination." This occurs when the model generates information that may sound plausible but is ultimately fictional or inaccurate. ChatGPT does not possess real-world knowledge and can unintentionally fabricate details or provide misleading information. Relying on such hallucinated content can compromise the reliability and factual accuracy of the essay, posing a risk to academic credibility. Students need to be vigilant in fact-checking and verifying information to ensure the accuracy of their AI-generated content.

Tried ChatGPT, but the Result Is Disappointing?

Try a professional essay service – a time-tested instrument for students seeking good results with minimal effort.  

How GenAI May Actually Help You with Essays

Generative AI can enhance essay production by offering valuable assistance in brainstorming ideas, overcoming writer's block, and providing creative inspiration, allowing students to refine and expand their own thoughts. The tool's ability to generate coherent sentences and structures can serve as a foundation upon which students build their essays, fostering a collaborative writing process. While it can assist in generating content, students must use the technology judiciously, actively engage in the writing process, and ensure the final output reflects their own voice and understanding of the subject matter. Here are three no-nonsense tips on how to use ChatGPT to write an essay:

Idea Generation and Brainstorming

ChatGPT can be used as a creative companion, helping students generate ideas for essays and overcome writer's block. By providing prompts or asking questions, the AI can inspire diverse perspectives and angles for the essay topic, kickstarting the thought process and expanding the range of potential content. ChatGPT not only assists in overcoming initial hurdles but also facilitates a dynamic exchange of ideas. By engaging in a virtual dialogue with the AI, students can explore different perspectives, receive real-time feedback, and foster a collaborative brainstorming process that goes beyond the traditional confines of solitary ideation, ultimately enriching the depth and diversity of their essay content.

Structural Assistance and Coherent Expression

The tool generates well-structured and coherent sentences, helping students organize their thoughts and express ideas fluently. ChatGPT's ability to understand context allows it to contribute to clear introductions, body paragraphs, and conclusions, providing a solid foundation that students can build upon for a more polished essay structure. Similar to the best essay writing apps, ChatGPT aids in developing entire paragraphs and sections, ensuring a seamless flow of ideas throughout the essay. Its contextual understanding can guide students in creating a well-organized narrative that strengthens the overall coherence and impact of their written work.

Language Enhancement and Vocabulary Enrichment

ChatGPT can help enhance the language and vocabulary used in an essay. By exposing users to a wide range of phrasing and terminology, the AI can assist in refining the language style and ensuring a more sophisticated and nuanced expression of ideas, thereby contributing to the overall eloquence and professionalism of the written piece. ChatGPT for writing essays can become a linguistic mentor for international students, suggesting alternative phrasings and introducing varied vocabulary, thereby refining the eloquence and richness of the essay. GenAI can enhance immediate writing tasks and contribute to the long-term development of language skills, fostering a more nuanced and sophisticated command of written expression.

Why Doing Essays by Yourself Is Always a Better Choice

While ChatGPT offers valuable support in writing, doing essays independently holds several advantages that contribute to a more enriching and authentic academic experience. Working on essays individually provides a deep engagement with the subject matter, allowing for a genuine exploration of personal insights and understanding. It enhances critical thinking skills, encourages original thought, and builds a strong foundation for academic growth. Although generative AI can be an assistant, it’s you who should be in charge of the writing process and deliver the final result. 

Critical Thinking and Analytical Skills

Independent essay writing promotes the cultivation of critical thinking and analytical skills. By grappling with the complexities of a topic without external assistance, students develop the ability to analyze, synthesize information, and construct well-reasoned arguments, contributing to a more profound understanding of the subject matter. Learn how to write an essay outline to take the first step toward mastering the art of academic writing.

Personalized Exploration of Ideas

Writing an essay independently allows students to embark on a personalized journey of idea exploration. They can delve into their unique perspectives, draw connections between concepts, and explore creative approaches to presenting their thoughts, fostering a deeper intellectual engagement with the material.

Enhanced Research Competence

Independently crafting essays necessitates comprehensive research, honing students' research skills and information retrieval abilities. This process not only ensures a more thorough understanding of the topic but also equips students with the skills necessary for conducting effective and independent research in future academic and professional endeavors.

Cultivation of Writing Style

Independent writing contributes to the development of an individual writing style. Through the iterative process of drafting, revising, and refining, students refine their voice, expression, and linguistic choices. This personal touch not only makes the essay more compelling but also contributes to the development of a unique and identifiable writing style.

Ownership and Satisfaction

Crafting essays independently fosters a sense of ownership over one's work. The satisfaction derived from overcoming challenges, expressing ideas authentically, and producing a piece of writing that reflects individual effort contributes to a deeper appreciation for the learning process and academic accomplishments.

Professional Writers: The Best of Two Worlds

When writing on your own is not an option and motivation occasionally drops, students often struggle with the question, 'Should I use ChatGPT for my essays?' While it may seem like a quick "push-button" solution, the imperfections and the need for extensive polishing can be time-consuming and frustrating. In such cases, entrusting the task to professional writers proves to be a mindful choice. With expertise, tailored content, and a commitment to quality, expert writers offer a streamlined alternative for those who seek well-crafted essays without compromising on excellence or spending hours refining AI-generated drafts.


Expertise and Specialized Knowledge

Experts of an essay service often possess specialized knowledge and expertise in various subjects. Unlike ChatGPT, which may lack depth in specific areas, human writers bring a nuanced understanding of complex topics, ensuring a higher level of accuracy, depth, and sophistication in the content. This expertise allows for incorporating relevant research, current trends, and a comprehensive understanding of the subject matter.

Tailored and Customized Content

Seasoned essay writers excel in tailoring content to specific requirements and individual preferences. While ChatGPT provides general suggestions, human writers can adapt their writing style, tone, and content to match the unique needs of a particular assignment or client. This personalized touch ensures that the essay meets specific guidelines, aligns with the student's voice, and addresses the nuances of the given topic more effectively.

Quality Assurance and Revisions

Professional writing platforms often come with quality assurance and revision policies. Human writers can produce polished, well-structured essays with attention to detail, grammar, and coherence. Moreover, these services typically offer revision options, allowing students to request modifications or clarifications until they are satisfied with the final product. This level of personalized attention and quality control when writing an essay far surpasses the capabilities of ChatGPT.

What Are Students Saying?

Despite the buzz surrounding generative AI, many students aren't impressed with ChatGPT's essay-writing abilities. Research reveals that students find the essays it produces too basic and broad for academic standards. Despite using ChatGPT for multiple semesters, they struggle to generate papers with the required depth and focus.

Detection is another issue. Tools like Turnitin easily spot AI-written content, causing concerns among students. One student reported that nearly all of their ChatGPT-generated essay was flagged as AI-written, even after attempting to disguise it by altering sentence structures.

Trust is a significant issue. Despite some students' attempts to utilize ChatGPT for simple problems in accounting, it consistently fails to provide accurate answers. Even when students tried to manipulate it, ChatGPT couldn't maintain consistency, leading to doubts about its reliability.

Furthermore, recent versions of ChatGPT seem less reliable and more biased. Responses vary widely between versions, and the AI often provides neutral or evasive answers instead of factual information. Additionally, it appears to censor or avoid certain topics, impacting the quality of assistance it offers.

Below are authentic comments from students on Reddit, sharing their concerns and observations:


Overall, students are increasingly disillusioned with ChatGPT's performance, citing its inability to produce high-quality essays and provide accurate information. Its limitations in detecting AI-generated content and inconsistencies in responses contribute to growing skepticism among users.

ChatGPT isn't designed for academic writing. It lacks the critical thinking and analysis abilities of human writers. Even though it can generate text, that text often lacks the accuracy and credibility needed for academic essays. Besides, essays produced by AI still need to be checked, revised, and updated by humans, which defeats the purpose of using AI for essay writing. It's better to rely on human writers and trusted sources for academic papers.

In the hustle and bustle of student life, when time becomes a precious commodity and motivation wanes, the allure of quick solutions like ChatGPT might seem tempting. However, the reality often unveils itself in the imperfections and the laborious task of refining AI-generated content. 

When faced with the twin challenges of time constraints and a lack of motivation, turning to human writers emerges as the optimal choice. Professional essay writers not only offer expertise and tailored content but also serve as partners in the creative process, crafting essays that resonate with individual voices and adhere to specific requirements. 

Daniel Parker

is a seasoned educational writer focusing on scholarship guidance, research papers, and various forms of academic essays including reflective and narrative essays. His expertise also extends to detailed case studies. A scholar with a background in English Literature and Education, Daniel’s work on EssayPro blog aims to support students in achieving academic excellence and securing scholarships. His hobbies include reading classic literature and participating in academic forums.

Adam

is an expert in nursing and healthcare, with a strong background in history, law, and literature. Holding advanced degrees in nursing and public health, his analytical approach and comprehensive knowledge help students navigate complex topics. On EssayPro blog, Adam provides insightful articles on everything from historical analysis to the intricacies of healthcare policies. In his downtime, he enjoys historical documentaries and volunteering at local clinics.


Draft With AI

How To Write a Scholarship Application With ChatGPT

Many students procrastinate on applying for scholarships to the point where they don't end up applying at all. Because of this, many scholarships receive only a measly number of applications. This means that students who bother to submit a half-decent application have a great chance of picking up thousands of dollars to help with their education.

A couple of hours for thousands of dollars? Sounds like a deal to me.

Getting Started

Pick a scholarship.

Check the scholarship directory for your university to see if there are any scholarships that require a written application. These will most likely have less competition.

Sign up for ChatGPT

Sign up for a ChatGPT account here.

If ChatGPT is not available (sometimes the service goes down due to demand), you can follow the tutorial using a free trial of an AI writing assistant.

Prompting AI to Write the Scholarship

We will generate the scholarship application step by step. In each step we will write a descriptive prompt telling the AI what to write.

Let's look at an example.

Generating the Structure


Write an outline for a scholarship application demonstrating academic ability, leadership capability, volunteer work, and work experience.


I. Introduction

  • Briefly introduce yourself and your background
  • Mention the scholarship you are applying for
  • Explain why you are a strong candidate for the scholarship (e.g. your academic achievements, leadership experience, etc.)

II. Academic ability

  • Discuss your academic achievements, such as your GPA, class rank, and any awards or honors you have received
  • Mention any academic clubs or organizations you have been a member of, and any leadership roles you have held within those groups
  • Describe any relevant academic projects or research you have participated in

III. Leadership capability

  • Discuss any leadership roles you have held, both inside and outside of school
  • Describe your leadership style and how you have used it to achieve success in your leadership roles
  • Provide examples of how you have demonstrated leadership, such as by organizing events, managing teams, or leading initiatives

IV. Volunteer work

  • Describe your volunteer experience, including the organizations you have worked with and the tasks you have completed
  • Explain how your volunteer work has helped you grow as a person and how it has impacted the community
  • Mention any awards or recognition you have received for your volunteer work

V. Work experience

  • Discuss any paid or unpaid work experience you have, including internships and part-time jobs
  • Describe the tasks and responsibilities you had in these roles and any skills or knowledge you gained from them
  • Explain how your work experience has prepared you for the scholarship and your future goals

VI. Conclusion

  • Summarize your strengths as a candidate and why you are deserving of the scholarship
  • Express your gratitude for the opportunity to apply and your excitement about the possibility of receiving the scholarship.

Writing About Your Academic Ability

Write a narrative about having to work hard to earn a 3.7 GPA, winning the regional science fair, and the amazing feeling of being recognized as an honors student


Writing a Paragraph about Leadership Capability

Write a narrative about being the editor of the student newspaper, being the treasurer for a fundraiser for cancer research, and organizing free soccer lessons for underprivileged children.


Write a Paragraph about Work Experience

Write a narrative about your internship working on web development for the local humane society and how it inspired you to study computer science


Closing Paragraph

Write a paragraph describing why you are excited to apply for the Marjorie Young Bell Scholarship and will benefit from the financial aid you receive


Final Product

Hopefully these examples sparked your imagination about how you can use ChatGPT to write your scholarship application. Try editing the prompts to reflect your own specific experience. Hopefully this helps you bag several thousand dollars in scholarships!
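If you plan to reuse these prompts for several scholarships, you can assemble them programmatically before pasting them into ChatGPT. The sketch below is purely illustrative (the function and template names are made up for this tutorial, not part of any ChatGPT tool); it fills the tutorial's prompt patterns with your own details:

```python
def build_scholarship_prompts(details):
    """Fill the tutorial's prompt templates with your own details.

    `details` maps placeholder names to short descriptions of your
    experience; extra keys are ignored by str.format.
    """
    templates = {
        "outline": ("Write an outline for a scholarship application "
                    "demonstrating academic ability, leadership capability, "
                    "volunteer work, and work experience."),
        "academics": "Write a narrative about {academics}",
        "leadership": "Write a narrative about {leadership}",
        "work": "Write a narrative about {work}",
        "closing": ("Write a paragraph describing why you are excited to "
                    "apply for the {scholarship} and will benefit from the "
                    "financial aid you receive"),
    }
    return {name: t.format(**details) for name, t in templates.items()}

prompts = build_scholarship_prompts({
    "academics": "working hard to earn a 3.7 GPA and winning the regional science fair",
    "leadership": "being the editor of the student newspaper",
    "work": "an internship working on web development",
    "scholarship": "Marjorie Young Bell Scholarship",
})
print(prompts["closing"])
```

Swapping in a new scholarship name or a different set of experiences then takes seconds instead of retyping each prompt.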

© 2023 Draft With AI.


I used ChatGPT to polish my essay, what are the consequences?

I used ChatGPT to polish my essay, now I’m freaking out.

As above, in February I had to write a reflective piece of about 1,000 words for university. The essay was based on my real experiences and reflection. I wrote the essay myself, then fed it into ChatGPT with prompts like "revise this to make it more professional." The output looked much better than my original writing but did not change my original meaning, and I used the ChatGPT output pretty much word for word as the essay I submitted. The essay might not be marked until July/August.

At the time my intention was not to cheat; I thought it was like using a better version of Grammarly, especially because I am not a native speaker and have always struggled with writing. Fast forward to today: I realised what I did was shady, put my essay into multiple AI detectors, and got picked up as AI-written by some. I looked up my university's policy about AI (published after I submitted the essay), which states you have to openly disclose any use of ChatGPT or it will count as contract cheating.

I have been living in pure fear the last couple of weeks and it's ruining my life. I am in the last year of a professional program, which means it will attract more severe penalties. I have immense regret for putting my education and career at risk for something so stupid. I also wish the university had talked to us about AI directly early on; in fact, to this day, I have received no information about AI from any faculty member. If I had known this was contract cheating, I would never have done it.

I have had no previous experience with academic misconduct, and I have no idea what to expect.


  • 5 Assuming the instructor does not detect the source, I'm kind of curious if you will get a good mark on your essay. I have seen a lot of results from ChatGPT that are not what I would call high quality. It seems to be grabbing things from the net based on keywords (or whatever the correct term is for key phrases) rather than a true understanding. –  Boba Fit Commented Apr 6, 2023 at 19:23
  • I doubt it. Reading back the essay I submitted now, I realise I could have done a better job myself. –  LonelyStudent Commented Apr 6, 2023 at 19:42
  • 4 They'll use ChatGPT to score it :) More seriously, I wouldn't worry too much about it, because you're right that people use things like Grammarly also. Also, people show their essays to others for feedback all the time. My impression is that since you actually wrote the essay and used ChatGPT as a tool, that's different from just asking ChatGPT to write the whole essay for you. But it's better to err on the side of caution in the future. –  cgb5436 Commented Apr 6, 2023 at 19:54
  • After this I think I have PTSD from ChatGPT and certainly will not be using AI for any future writing. –  LonelyStudent Commented Apr 6, 2023 at 20:07
  • "I thought it was like using a better version of Grammarly." You are correct. The issue is that the policies are making a big fuss about ChatGPT and the like. Just withdraw from the prize; you do not need to provide a motivation, but you can always provide one if you feel like it. Maybe you can write an essay on why you equated ChatGPT with Grammarly on steroids to motivate your dropping out of the competition. –  EarlGrey Commented Apr 26, 2023 at 19:52

4 Answers

The consequences for any sort of cheating/misconduct are determined by your institution. Generally, I'd expect more leniency for first offenses and more leniency for things in a "gray area" versus clear violations.

It would seem extremely unfair to me to apply a policy published after your submission, and if you were accused under that policy this seems like a straightforward defense. There is certainly some level of novelty to these tools for everyone involved, and likely to be some growing pains associated with that.

It's possible a strict interpretation would find you have violated some already existing policies, with the new policy only meant to clarify. It could be argued that, especially if you are expected to be graded on things like the language used, you have not submitted work that reflects your own writing abilities. You've also violated any terms of using ChatGPT that require disclosure/attribution.

I don't know what academic system you are in where assignments are graded so long after they are submitted. Possibly you could in the meantime write a proper version of your original script without ChatGPT, confess to having violated a policy before it was a clear policy, and ask that your new submission be accepted. If it were me responsible for assigning a grade, I'd find this to be honest and commendable, but I can't have any idea what the attitude will be at your institution. As BobaFit alludes to in a comment, many people find that ChatGPT's idea of "professional" or good writing really...isn't, so it's also quite possible that the version you've submitted will not get a better grade than what you could have produced yourself, even with your low confidence in your writing.


  • Thank you for your comment. The reason why it takes so long is that it wasn't actually an assignment; it was an application for a prize I was eligible to apply for due to good academic standing (how ironic, I know), and we won't find out who gets the prize until much later. The "policy" defining contract cheating as involving a third party has always been around, but the statement tying AI use to contract cheating was only published on my university's website a couple of days ago. –  LonelyStudent Commented Apr 6, 2023 at 19:47
  • 1 @LonelyStudent If you're feeling so much stress about it, can you simply withdraw your name from consideration for the prize? –  Bryan Krause ♦ Commented Apr 6, 2023 at 19:56
  • I actually really want to withdraw my name but do not know how to. I emailed the person who sent out the invitations to apply weeks ago asking when we might find out the results but got no response. There are no clear procedures on how to withdraw, and I'm worried that if I ask to withdraw now it will look suspicious. I don't think I will get the prize, and I certainly hope I don't, due to the essay being mediocre and off topic. –  LonelyStudent Commented Apr 6, 2023 at 20:03
  • 4 @LonelyStudent I think it's really probably not a big deal. You could email them and say "there's a new policy that makes clear that disclosure of ChatGPT usage is required; you had used ChatGPT previously to edit your submission; even though the policy was posted after your submission, you feel most comfortable if your name is withdrawn from consideration." They might thank you for your honesty and withdraw your name, or they might say it really doesn't matter; I think it's extremely unlikely there would be any other consequence. It's not worth anyone's time and you haven't benefitted. –  Bryan Krause ♦ Commented Apr 6, 2023 at 20:11
  • Thank you for your time and valuable advice. I think I will write an email to formally ask for my application to be withdrawn, but I don't feel comfortable admitting using ChatGPT as the reason, as I feel embarrassed and worry that it will attract unwanted attention. However, if they raise the suspicion of using AI, I will be honest and admit it. –  LonelyStudent Commented Apr 6, 2023 at 21:33

Earlier in the year, I wrote a policy thought on conversational AI (ChatGPT). I refer to it because, in my view, your use of ChatGPT is exactly what I thought the tool could be: an assistive tool rather than a replacement tool.

You indicated that you used ChatGPT to polish up your essay, although you did not say to what degree. What is instructive is that you are not saying you wrote your essay with ChatGPT: in essence, it was not written by ChatGPT from the ground up.

In the circumstances, I strongly suggest that

  • you send a follow-up declaration email indicating that ChatGPT was used as a consulted tool to polish your essay.
  • you could even send through a supplemented copy of your essay (unofficially, as the submitted essay remains the official one), along with your declarative email. The supplement will be in two parts: the first is the exact copy you submitted; the second (more of an appendix) will succinctly outline how ChatGPT was used or consulted.

PS: I took note of

I looked up my university’s policy about AI (published after I submitted the essay), which states you have to openly disclose any use of CharGPT or it will count as contract cheating.

By sending the declarative email, you are in fact (proactively) aligning with the university's policy. The policy came after you submitted; you have now seen it and are acting on it.

My view about conversational AI like ChatGPT is that it should not be banned outright. It should be encouraged as part of the learning process, as learning scaffolding.

[Extract from my policy thought on ChatGPT]

  • Students must disclose their ChatGPT search terms (keywords)
  • Students must include their ChatGPT result verbatim as an appendix
  • Students must write their own assignment/assessment submission
  • Students must show their creativity, their critical thinking and cognitive skills in their writing
  • It is a crime or punishable offence to use tools to paraphrase ChatGPT results.
  • Students must show their own originality.
  • For computing-based assignments/assessments, students must fully comment on their work: introduction/summary, each line of code, and each function/class


  • 1 Thank you for your advice. Unfortunately I cleared my ChatGPT history, so I don't have access to the conversations anymore. As this essay wasn't an assignment but an application for a prize, my plan is to consult with the student advocate first and then send a formal email acknowledging the use of ChatGPT and asking for my application to be withdrawn. –  LonelyStudent Commented Apr 9, 2023 at 7:07
  • 1 Do you still have a copy of the essay that you originally wrote? –  Neithea Commented Apr 9, 2023 at 9:52
  • I used Google Docs, which has version histories, but it did not save every single edit, so I only have the first half of the essay I originally wrote. –  LonelyStudent Commented Apr 9, 2023 at 10:35
  • I would imagine it won't be too difficult to argue the essay was originally written by me, since it was based on personal experiences, which AI can't do. –  LonelyStudent Commented Apr 9, 2023 at 10:37

Disclose to your lecturer, and seek to remedy the problem

Since your essay has not yet been assessed, and the university itself has not detected the use of ChatGPT, I recommend that you approach them and advise them of your use of this facility for "polishing", let them know that you are worried that this may not be allowable, and ask them if they would prefer for you to resubmit the earlier version of your essay prior to adding this "polish".

If you were the one to make this approach (as opposed to having the university detect it and raise it with you) then that is much less likely to be interpreted as cheating. Moreover, even if your previous submission were to be regarded as a breach of academic rules, if you were to take the initiative to bring this to the attention of the university and offer to resubmit, that would be a heavily mitigating factor. The likely outcome is that your lecturer will counsel you on what is and is not allowed, and will want you to submit the pre-ChatGPT version of your essay.

Assuming you still have your pre-ChatGPT draft, I recommend that you immediately write to your lecturer to disclose the problem and seek a solution. Here is an example of the kind of thing I have in mind:

Dear Professor ... I am writing in relation to my essay I submitted in February, which I understand is due to be assessed soon. I am writing to disclose that for this essay I constructed an initial draft that I wrote myself, but then I used ChatGPT to polish the prose for the essay, and took substantive changes from this source. The essay presently does not cite ChatGPT as a source. It has come to my attention that this might not be allowed under university rules, and so I would like to find out (a) if use of ChatGPT for this purpose is allowed; (b) if and how I am supposed to cite ChatGPT as a source; and (c) if not allowed at all, what I should do to go about remedying this error. In case it is not allowable to use ChatGPT for revisions of the essay, I have retained the previous draft of my essay prior to use of ChatGPT and would be happy to submit that version in lieu of the presently submitted version. If use of ChatGPT is allowable with attribution, then I would like to revise my essay to make the required attribution. Please accept my sincere apologies if I have done the wrong thing here, and for any inconvenience it has caused. I would appreciate if you could advise me on what you would like me to do with my essay in order to make sure it is compliant with university rules. ...


Since we don't know where you study or your university's internal regulations, we can't answer this question. Each university handles such cases according to its own regulations, so it would probably be best to contact your own study department, or someone who understands these regulations, and consult them.

From a professional point of view, I would recommend developing your own methodology for assessing the veracity of ChatGPT responses. ChatGPT is not an AI with logical intelligence, so its answers may be wrong. ChatGPT is a chatbot that synthesizes text (probabilistically) from the sources it was given during training. The output is then ranked: the developers assign scores to individual outputs (so that, for example, inappropriate answers are pushed to the back). However, this mechanism cannot check by itself whether, for example, an answer contains statements that contradict each other. The system was developed to synthesize text, answer, and converse; it is less reliable when it comes to the truthfulness of that text.

The correctness of the answer also depends on the precision of the question. When you ask a question that does not directly correspond to the answer you need (even if it seems to), the chatbot will answer it, but in reality the response does not address your question. It is best to experiment a little: see what answers different ChatGPT contexts offer to the same question asked in different ways, and how the bot handles conflicting questions or events that occurred after its last update.

With the above two paragraphs, I want to say that when you use chatbots, for example to deepen your knowledge of a topic, you need to be careful that they do not corrupt your understanding.



  • Open access
  • Published: 30 October 2023

A large-scale comparison of human-written versus ChatGPT-generated essays

  • Steffen Herbold,
  • Annette Hautli-Janisz,
  • Ute Heuer,
  • Zlata Kikteva &
  • Alexander Trautsch

Scientific Reports, volume 13, Article number: 18617 (2023)


  • Computer science
  • Information technology

ChatGPT and similar generative AI models have attracted hundreds of millions of users and have become part of the public discourse. Many believe that such models will disrupt society and lead to significant changes in the education system and information generation. So far, this belief is based on either colloquial evidence or benchmarks from the owners of the models—both lack scientific rigor. We systematically assess the quality of AI-generated content through a large-scale study comparing human-written versus ChatGPT-generated argumentative student essays. We use essays that were rated by a large number of human experts (teachers). We augment the analysis by considering a set of linguistic characteristics of the generated essays. Our results demonstrate that ChatGPT generates essays that are rated higher regarding quality than human-written essays. The writing style of the AI models exhibits linguistic characteristics that are different from those of the human-written essays. Since the technology is readily available, we believe that educators must act immediately. We must re-invent homework and develop teaching concepts that utilize these AI models in the same way as math utilizes the calculator: teach the general concepts first and then use AI tools to free up time for other learning objectives.


Introduction

The massive uptake in the development and deployment of large-scale Natural Language Generation (NLG) systems in recent months has yielded an almost unprecedented worldwide discussion of the future of society. The ChatGPT service, which serves as a web front-end to GPT-3.5 1 and GPT-4, was the fastest-growing service in history, breaking the 100-million-user milestone in January and reaching 1 billion visits by February 2023 2 .

Driven by the upheaval that is particularly anticipated for education 3 and knowledge transfer for future generations, we conduct the first independent, systematic study of AI-generated language content that is typically dealt with in high-school education: argumentative essays, i.e. essays in which students discuss a position on a controversial topic by collecting and reflecting on evidence (e.g. ‘Should students be taught to cooperate or compete?’). Learning to write such essays is a crucial aspect of education, as students learn to systematically assess and reflect on a problem from different perspectives. Understanding the capability of generative AI to perform this task increases our understanding of the skills of the models, as well as of the challenges educators face when it comes to teaching this crucial skill. While there is a multitude of individual examples and anecdotal evidence for the quality of AI-generated content in this genre (e.g. 4 ) this paper is the first to systematically assess the quality of human-written and AI-generated argumentative texts across different versions of ChatGPT 5 . We use a fine-grained essay quality scoring rubric based on content and language mastery and employ a significant pool of domain experts, i.e. high school teachers across disciplines, to perform the evaluation. Using computational linguistic methods and rigorous statistical analysis, we arrive at several key findings:

AI models generate significantly higher-quality argumentative essays than the users of an essay-writing online forum frequented by German high-school students across all criteria in our scoring rubric.

ChatGPT-4 (ChatGPT web interface with the GPT-4 model) significantly outperforms ChatGPT-3 (ChatGPT web interface with the GPT-3.5 default model) with respect to logical structure, language complexity, vocabulary richness and text linking.

Writing styles between humans and generative AI models differ significantly: for instance, the GPT models use more nominalizations and have higher sentence complexity (signaling more complex, ‘scientific’, language), whereas the students make more use of modal and epistemic constructions (which tend to convey speaker attitude).

The linguistic diversity of the NLG models seems to be improving over time: while ChatGPT-3 still has a significantly lower linguistic diversity than humans, ChatGPT-4 has a significantly higher diversity than the students.

Our work goes significantly beyond existing benchmarks. While OpenAI’s technical report on GPT-4 6 presents some benchmarks, their evaluation lacks scientific rigor: it fails to provide vital information like the agreement between raters, does not report on details regarding the criteria for assessment or to what extent and how a statistical analysis was conducted for a larger sample of essays. In contrast, our benchmark provides the first (statistically) rigorous and systematic study of essay quality, paired with a computational linguistic analysis of the language employed by humans and two different versions of ChatGPT, offering a glance at how these NLG models develop over time. While our work is focused on argumentative essays in education, the genre is also relevant beyond education. In general, studying argumentative essays is one important aspect to understand how good generative AI models are at conveying arguments and, consequently, persuasive writing in general.

Related work

Natural language generation.

The recent interest in generative AI models can be largely attributed to the public release of ChatGPT, a public interface in the form of an interactive chat based on the InstructGPT 1 model, more commonly referred to as GPT-3.5. In comparison to the original GPT-3 7 and other similar generative large language models based on the transformer architecture like GPT-J 8 , this model was not trained in a purely self-supervised manner (e.g. through masked language modeling). Instead, a pipeline that involved human-written content was used to fine-tune the model and improve the quality of the outputs to both mitigate biases and safety issues, as well as make the generated text more similar to text written by humans. Such models are referred to as Fine-tuned LAnguage Nets (FLANs). For details on their training, we refer to the literature 9 . Notably, this process was recently reproduced with publicly available models such as Alpaca 10 and Dolly (i.e. the complete models can be downloaded and not just accessed through an API). However, we can only assume that a similar process was used for the training of GPT-4 since the paper by OpenAI does not include any details on model training.

Testing of the language competency of large-scale NLG systems has only recently started. Cai et al. 11 show that ChatGPT reuses sentence structure, accesses the intended meaning of an ambiguous word, and identifies the thematic structure of a verb and its arguments, replicating human language use. Mahowald 12 compares ChatGPT’s acceptability judgments to human judgments on the Article + Adjective + Numeral + Noun construction in English. Dentella et al. 13 show that ChatGPT-3 fails to understand low-frequent grammatical constructions like complex nested hierarchies and self-embeddings. In another recent line of research, the structure of automatically generated language is evaluated. Guo et al. 14 show that in question-answer scenarios, ChatGPT-3 uses different linguistic devices than humans. Zhao et al. 15 show that ChatGPT generates longer and more diverse responses when the user is in an apparently negative emotional state.

Given that we aim to identify certain linguistic characteristics of human-written versus AI-generated content, we also draw on related work in the field of linguistic fingerprinting, which assumes that each human has a unique way of using language to express themselves, i.e. the linguistic means that are employed to communicate thoughts, opinions and ideas differ between humans. That these properties can be identified with computational linguistic means has been showcased across different tasks: the computation of a linguistic fingerprint allows to distinguish authors of literary works 16 , the identification of speaker profiles in large public debates 17 , 18 , 19 , 20 and the provision of data for forensic voice comparison in broadcast debates 21 , 22 . For educational purposes, linguistic features are used to measure essay readability 23 , essay cohesion 24 and language performance scores for essay grading 25 . Integrating linguistic fingerprints also yields performance advantages for classification tasks, for instance in predicting user opinion 26 , 27 and identifying individual users 28 .

Limitations of OpenAIs ChatGPT evaluations

OpenAI published a discussion of the model's performance on several tasks, including Advanced Placement (AP) classes within the US educational system 6 . The subjects used in performance evaluation are diverse and include arts, history, English literature, calculus, statistics, physics, chemistry, economics, and US politics. While the models achieved good or very good marks in most subjects, they did not perform well in English literature. GPT-3.5 also experienced problems with chemistry, macroeconomics, physics, and statistics. While the overall results are impressive, there are several significant issues: firstly, the conflict of interest of the model's owners poses a problem for the interpretation of the performance. Secondly, there are issues with the soundness of the assessment beyond the conflict of interest, which make the generalizability of the results hard to assess with respect to the models' capability to write essays. Notably, the AP exams combine multiple-choice questions with free-text answers. Only the aggregated scores are publicly available. To the best of our knowledge, neither the generated free-text answers, their overall assessment, nor their assessment given specific criteria from the used judgment rubric are published. Thirdly, while the paper states that 1–2 qualified third-party contractors participated in the rating of the free-text answers, it is unclear how often multiple ratings were generated for the same answer and what the agreement between them was. This lack of information hinders a scientifically sound judgement regarding the capabilities of these models in general, but also specifically for essays. Lastly, the owners of the model conducted their study in a few-shot prompt setting, where they gave the models a very structured template as well as an example of a human-written high-quality essay to guide the generation of the answers. This further fine-tuning of what the models generate could have also influenced the output.
The results published by the owners go beyond the AP courses, which are directly comparable to our work, and also consider other student assessments like Graduate Record Examinations (GREs). However, these evaluations suffer from the same problems with scientific rigor as the AP classes.

Scientific assessment of ChatGPT

Researchers across the globe are currently assessing the individual capabilities of these models with greater scientific rigor. We note that due to the recency and speed of these developments, the hereafter discussed literature has mostly only been published as pre-prints and has not yet been peer-reviewed. In addition to the above issues concretely related to the assessment of the capabilities to generate student essays, it is also worth noting that there are likely large problems with the trustworthiness of evaluations, because of data contamination, i.e. because the benchmark tasks are part of the training of the model, which enables memorization. For example, Aiyappa et al. 29 find evidence that this is likely the case for benchmark results regarding NLP tasks. This complicates the effort by researchers to assess the capabilities of the models beyond memorization.

Nevertheless, the first assessment results are already available – though mostly focused on ChatGPT-3 and not yet ChatGPT-4. Closest to our work is a study by Yeadon et al. 30 , who also investigate ChatGPT-3 performance when writing essays. They grade essays generated by ChatGPT-3 for five physics questions based on criteria that cover academic content, appreciation of the underlying physics, grasp of subject material, addressing the topic, and writing style. For each question, ten essays were generated and rated independently by five researchers. While the sample size precludes a statistical assessment, the results demonstrate that the AI model is capable of writing high-quality physics essays, but that the quality varies in a manner similar to human-written essays.

Guo et al. 14 create a set of free-text question answering tasks based on data they collected from the internet, e.g. question answering from Reddit. The authors then sample thirty triplets of a question, a human answer, and a ChatGPT-3 generated answer and ask human raters to assess if they can detect which was written by a human, and which was written by an AI. While this approach does not directly assess the quality of the output, it serves as a Turing test 31 designed to evaluate whether humans can distinguish between human- and AI-produced output. The results indicate that humans are in fact able to distinguish between the outputs when presented with a pair of answers. Humans familiar with ChatGPT are also able to identify over 80% of AI-generated answers without seeing a human answer in comparison. However, humans who are not yet familiar with ChatGPT-3 are not capable of identifying AI-written answers about 50% of the time. Moreover, the authors also find that the AI-generated outputs are deemed to be more helpful than the human answers in slightly more than half of the cases. This suggests that the strong results from OpenAI’s own benchmarks regarding the capabilities to generate free-text answers generalize beyond the benchmarks.

There are, however, some indicators that the benchmarks may be overly optimistic in their assessment of the model’s capabilities. For example, Kortemeyer 32 conducts a case study to assess how well ChatGPT-3 would perform in a physics class, simulating the tasks that students need to complete as part of the course: answer multiple-choice questions, do homework assignments, ask questions during a lesson, complete programming exercises, and write exams with free-text questions. Notably, ChatGPT-3 was allowed to interact with the instructor for many of the tasks, allowing for multiple attempts as well as feedback on preliminary solutions. The experiment shows that ChatGPT-3’s performance is in many aspects similar to that of the beginning learners and that the model makes similar mistakes, such as omitting units or simply plugging in results from equations. Overall, the AI would have passed the course with a low score of 1.5 out of 4.0. Similarly, Kung et al. 33 study the performance of ChatGPT-3 in the United States Medical Licensing Exam (USMLE) and find that the model performs at or near the passing threshold. Their assessment is a bit more optimistic than Kortemeyer’s as they state that this level of performance, comprehensible reasoning and valid clinical insights suggest that models such as ChatGPT may potentially assist human learning in clinical decision making.

Frieder et al. 34 evaluate the capabilities of ChatGPT-3 in solving graduate-level mathematical tasks. They find that while ChatGPT-3 seems to have some mathematical understanding, its level is well below that of an average student and in most cases is not sufficient to pass exams. Yuan et al. 35 consider the arithmetic abilities of language models, including ChatGPT-3 and ChatGPT-4. They find that they exhibit the best performance among other currently available language models (incl. Llama 36 , FLAN-T5 37 , and Bloom 38 ). However, the accuracy of basic arithmetic tasks is still only at 83% when considering correctness to the degree of \(10^{-3}\) , i.e. such models are still not capable of functioning reliably as calculators. In a slightly satiric, yet insightful take, Spencer et al. 39 assess how a scientific paper on gamma-ray astrophysics would look like, if it were written largely with the assistance of ChatGPT-3. They find that while the language capabilities are good and the model is capable of generating equations, the arguments are often flawed and the references to scientific literature are full of hallucinations.

The general reasoning skills of the models may also not be at the level expected from the benchmarks. For example, Cherian et al. 40 evaluate how well ChatGPT-3 performs on eleven puzzles that second graders should be able to solve and find that ChatGPT is only able to solve them on average in 36.4% of attempts, whereas the second graders achieve a mean of 60.4%. However, their sample size is very small and the problem was posed as a multiple-choice question answering problem, which cannot be directly compared to the NLG we consider.

Research gap

Within this article, we address an important part of the current research gap regarding the capabilities of ChatGPT (and similar technologies), guided by the following research questions:

RQ1: How good is ChatGPT based on GPT-3 and GPT-4 at writing argumentative student essays?

RQ2: How do AI-generated essays compare to essays written by students?

RQ3: What are linguistic devices that are characteristic of student versus AI-generated content?

We study these aspects with the help of a large group of teaching professionals who systematically assess a large corpus of student essays. To the best of our knowledge, this is the first large-scale, independent scientific assessment of ChatGPT (or similar models) of this kind. Answering these questions is crucial to understanding the impact of ChatGPT on the future of education.

Materials and methods

The essay topics originate from a corpus of argumentative essays in the field of argument mining 41 . Argumentative essays require students to think critically about a topic and use evidence to establish a position on the topic in a concise manner. The corpus features essays for 90 topics from Essay Forum 42 , an active community for providing writing feedback on different kinds of text that is frequented by high-school students seeking feedback from native speakers on their essay-writing capabilities. Information about the age of the writers is not available, but the topics indicate that the essays were written in grades 11–13, meaning the authors were likely at least 16. Topics range from ‘Should students be taught to cooperate or to compete?’ to ‘Will newspapers become a thing of the past?’. In the corpus, each topic features one human-written essay uploaded and discussed in the forum. The students who wrote the essays are not native speakers. The average length of these essays is 19 sentences with 388 tokens (2,089 characters on average), and they will be termed ‘student essays’ in the remainder of the paper.

For the present study, we use the topics from Stab and Gurevych 41 and prompt ChatGPT with ‘Write an essay with about 200 words on “[ topic ]”’ to receive automatically-generated essays from the ChatGPT-3 and ChatGPT-4 versions of 22 March 2023 (‘ChatGPT-3 essays’, ‘ChatGPT-4 essays’). No additional prompts were used to obtain the responses, i.e. the data was created with a basic prompt in a zero-shot scenario. This is in contrast to the benchmarks by OpenAI, who used an engineered prompt in a few-shot scenario to guide the generation of essays. We note that we decided to ask for 200 words because we noticed a tendency of ChatGPT to generate essays longer than the desired length. A prompt asking for 300 words typically yielded essays with more than 400 words. Thus, by using the shorter length of 200, we prevent a potential advantage for ChatGPT through longer essays, and instead err on the side of brevity. Similar to the evaluations of free-text answers by OpenAI, we did not consider multiple configurations of the model due to the effort required to obtain human judgments. For the same reason, our data is restricted to ChatGPT and does not include other models available at that time, e.g. Alpaca. We use the browser versions of the tools because we consider this a more realistic scenario than using the API. Table 1 below shows the core statistics of the resulting dataset. Supplemental material S1 shows examples of essays from the data set.
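The zero-shot prompting setup can be sketched as follows. This is illustrative only: the authors used the browser interface rather than an API, and the straight quotation marks in the prompt string are a simplification of the typography shown above.

```python
def build_prompt(topic: str) -> str:
    """Zero-shot prompt used in the study (straight quotes for simplicity)."""
    return f'Write an essay with about 200 words on "{topic}"'

# Two of the 90 topics from Stab and Gurevych named in the text
topics = [
    "Should students be taught to cooperate or to compete?",
    "Will newspapers become a thing of the past?",
]

prompts = [build_prompt(t) for t in topics]
```

Each such prompt was then submitted once per model version, with no follow-up prompts or example essays.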

Annotation study

Study participants.

The participants had registered for a two-hour online training entitled ‘ChatGPT – Challenges and Opportunities’ conducted by the authors of this paper as a means to provide teachers with some of the technological background of NLG systems in general and ChatGPT in particular. Only teachers permanently employed at secondary schools were allowed to register for this training. Focusing on these experts alone allows us to receive meaningful results as those participants have a wide range of experience in assessing students’ writing. A total of 139 teachers registered for the training, 129 of them teach at grammar schools, and only 10 teachers hold a position at other secondary schools. About half of the registered teachers (68 teachers) have been in service for many years and have successfully applied for promotion. For data protection reasons, we do not know the subject combinations of the registered teachers. We only know that a variety of subjects are represented, including languages (English, French and German), religion/ethics, and science. Supplemental material S5 provides some general information regarding German teacher qualifications.

The training began with an online lecture followed by a discussion phase. Teachers were given an overview of language models and basic information on how ChatGPT was developed. After about 45 minutes, the teachers received both a written and an oral explanation of the questionnaire at the core of our study (see Supplementary material S3 ) and were informed that they had 30 minutes to finish the study tasks. The explanation included information on how the data was obtained, why we collected the self-assessment, how we chose the criteria for the rating of the essays, the overall goal of our research, and a walk-through of the questionnaire. Participation in the questionnaire was voluntary and did not affect the awarding of a training certificate. We further informed participants that all data was collected anonymously and that we would have no way of identifying who participated in the questionnaire. We informed participants orally that they consent to the use of the provided ratings for our research by participating in the survey.

Once these instructions were provided orally and in writing, the link to the online form was given to the participants. The online form was running on a local server that did not log any information that could identify the participants (e.g. IP address) to ensure anonymity. As per instructions, consent for participation was given by using the online form. Due to the full anonymity, we could by definition not document who exactly provided the consent. This was implemented as further insurance that non-participation could not possibly affect being awarded the training certificate.

About 20% of the training participants did not take part in the questionnaire study, the remaining participants consented based on the information provided and participated in the rating of essays. After the questionnaire, we continued with an online lecture on the opportunities of using ChatGPT for teaching as well as AI beyond chatbots. The study protocol was reviewed and approved by the Research Ethics Committee of the University of Passau. We further confirm that our study protocol is in accordance with all relevant guidelines.

Questionnaire

The questionnaire consists of three parts: first, a brief self-assessment regarding the English skills of the participants which is based on the Common European Framework of Reference for Languages (CEFR) 43 . We have six levels ranging from ‘comparable to a native speaker’ to ‘some basic skills’ (see supplementary material S3 ). Then each participant was shown six essays. The participants were only shown the generated text and were not provided with information on whether the text was human-written or AI-generated.

The questionnaire covers the seven categories relevant for essay assessment shown below (for details see supplementary material S3 ):

Topic and completeness

Logic and composition

Expressiveness and comprehensiveness

Language mastery

Vocabulary and text linking

Language constructs

These categories are used as guidelines for essay assessment 44 established by the Ministry for Education of Lower Saxony, Germany. For each criterion, a seven-point Likert scale with scores from zero to six is defined, where zero is the worst score (e.g. no relation to the topic) and six is the best score (e.g. addressed the topic to a special degree). The questionnaire included a written description as guidance for the scoring.

After rating each essay, the participants were also asked to self-assess their confidence in the ratings. We used a five-point Likert scale based on the criteria for the self-assessment of peer-review scores from the Association for Computational Linguistics (ACL). Once a participant finished rating the six essays, they were shown a summary of their ratings, as well as the individual ratings for each of their essays and the information on how the essay was generated.

Computational linguistic analysis

In order to further explore and compare the quality of the essays written by students and ChatGPT, we consider the six following linguistic characteristics: lexical diversity, sentence complexity, nominalization, presence of modals, epistemic and discourse markers. Those are motivated by previous work: Weiss et al. 25 observe the correlation between measures of lexical, syntactic and discourse complexities to the essay gradings of German high-school examinations while McNamara et al. 45 explore cohesion (indicated, among other things, by connectives), syntactic complexity and lexical diversity in relation to the essay scoring.

Lexical diversity

We identify vocabulary richness by using a well-established measure of textual lexical diversity (MTLD) 46 which is often used in the field of automated essay grading 25 , 45 , 47 . It takes into account the number of unique words but, unlike the best-known measure of lexical diversity, the type-token ratio (TTR), it is not as sensitive to differences in text length. In fact, Koizumi and In’nami 48 find it to be the measure least affected by differences in text length compared to several other measures of lexical diversity. This is relevant to us due to the difference in average length between the human-written and ChatGPT-generated essays.
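A minimal sketch of MTLD, assuming the standard TTR threshold of 0.72 and the usual forward/backward averaging; production work would use a tested implementation and proper tokenization rather than this simplified version.

```python
def mtld_pass(tokens, threshold=0.72):
    """One directional pass of MTLD: count 'factors', i.e. segments
    whose running type-token ratio falls to the threshold, then reset."""
    factors = 0.0
    types = set()
    token_count = 0
    for tok in tokens:
        token_count += 1
        types.add(tok)
        if len(types) / token_count <= threshold:
            factors += 1
            types.clear()
            token_count = 0
    if token_count > 0:  # partial factor for the leftover segment
        ttr = len(types) / token_count
        factors += (1 - ttr) / (1 - threshold)
    return len(tokens) / factors if factors > 0 else float(len(tokens))

def mtld(tokens, threshold=0.72):
    """MTLD is the mean of a forward and a backward pass over the tokens."""
    backward = list(reversed(tokens))
    return (mtld_pass(tokens, threshold) + mtld_pass(backward, threshold)) / 2
```

Higher MTLD means richer vocabulary: a maximally repetitive text completes a factor every couple of tokens, while a text with no repetition completes none.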

Syntactic complexity

We use two measures in order to evaluate the syntactic complexity of the essays. One is based on the maximum depth of the sentence dependency tree which is produced using the spaCy 3.4.2 dependency parser 49 (‘Syntactic complexity (depth)’). For the second measure, we adopt an approach similar in nature to the one by Weiss et al. 25 who use clause structure to evaluate syntactic complexity. In our case, we count the number of conjuncts, clausal modifiers of nouns, adverbial clause modifiers, clausal complements, clausal subjects, and parataxes (‘Syntactic complexity (clauses)’). The supplementary material in S2 shows the difference between sentence complexity based on two examples from the data.
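In spaCy, heads and dependency labels come directly from the parsed `Doc`. As a self-contained sketch, both measures can be computed from per-token head indices and dependency labels; the label set below matches the clause types named above, but the exact tag names are an assumption (Universal Dependencies-style).

```python
# Dependency labels counted as clause indicators (assumed UD-style names):
# conjuncts, clausal modifiers of nouns, adverbial clause modifiers,
# clausal complements, clausal subjects, parataxes
CLAUSAL_DEPS = {"conj", "acl", "advcl", "ccomp", "csubj", "parataxis"}

def tree_depth(heads):
    """Maximum depth of a dependency tree given per-token head indices.
    The root token points at itself and has depth 1."""
    def depth_of(i):
        d = 1
        while heads[i] != i:
            i = heads[i]
            d += 1
        return d
    return max(depth_of(i) for i in range(len(heads)))

def clause_count(dep_labels):
    """Count clause-forming dependency relations in one sentence."""
    return sum(1 for d in dep_labels if d in CLAUSAL_DEPS)
```

For "She left because it rained", with heads [1, 1, 4, 4, 1] and labels ["nsubj", "ROOT", "mark", "nsubj", "advcl"], the depth measure is 3 and the clause measure is 1.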

Nominalization is a common feature of a more scientific style of writing 50 and is used as an additional measure for syntactic complexity. In order to explore this feature, we count occurrences of nouns with suffixes such as ‘-ion’, ‘-ment’, ‘-ance’ and a few others which are known to transform verbs into nouns.
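The suffix-based count reduces to a string check. The suffixes beyond ‘-ion’, ‘-ment’ and ‘-ance’ in the sketch below are illustrative assumptions, and in practice the check would be applied only to tokens the POS tagger identifies as nouns:

```python
# '-ion', '-ment' and '-ance' are named in the text; the remaining
# suffixes are illustrative assumptions.
NOMINAL_SUFFIXES = ("ion", "ment", "ance", "ence", "ity", "ness")

def count_nominalizations(nouns):
    """Count nouns carrying a typical verb-to-noun suffix;
    `nouns` is assumed to be pre-filtered by a POS tagger."""
    return sum(1 for noun in nouns if noun.lower().endswith(NOMINAL_SUFFIXES))
```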

Semantic properties

Both modals and epistemic markers signal the commitment of the writer to their statement. We identify modals using spaCy’s POS-tagging module, together with a list of epistemic expressions of modality, such as ‘definitely’ and ‘potentially’, which has also been used in other approaches to identifying semantic properties 51 . For epistemic markers we adopt an empirically driven approach and utilize the epistemic markers identified in a corpus of dialogical argumentation by Hautli-Janisz et al. 52 . We consider expressions such as ‘I think’, ‘it is believed’ and ‘in my opinion’ to be epistemic.
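Both features reduce to lookups once the text is tokenized. The modal list below stands in for spaCy's POS-based detection, and the epistemic expressions are a small illustrative subset of the markers from Hautli-Janisz et al. 52 :

```python
import string

# Stand-in for spaCy's POS-based modal detection (tag 'MD').
MODAL_VERBS = {"can", "could", "may", "might", "must",
               "shall", "should", "will", "would"}
# Illustrative subset of the epistemic markers used in the paper.
EPISTEMIC_MARKERS = ("i think", "it is believed", "in my opinion",
                     "definitely", "potentially")

def count_modals(text):
    """Count modal verbs after a naive lowercase tokenization."""
    tokens = [t.strip(string.punctuation) for t in text.lower().split()]
    return sum(1 for t in tokens if t in MODAL_VERBS)

def count_epistemic(text):
    """Count occurrences of multi-word and single-word epistemic markers."""
    lowered = text.lower()
    return sum(lowered.count(marker) for marker in EPISTEMIC_MARKERS)
```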

Discourse properties

Discourse markers can be used to measure the coherence quality of a text. This has been explored by Somasundaran et al. 53 , who use discourse markers to evaluate the story-telling aspect of student writing, while Nadeem et al. 54 incorporate them in their deep learning-based approach to automated essay scoring. In the present paper, we employ the PDTB list of discourse markers 55 , which we adjust to exclude words that are frequently used for purposes other than indicating discourse relations, such as ‘like’, ‘for’ and ‘in’.

Statistical methods

We use a within-subjects design for our study. Each participant was shown six randomly selected essays. Results were submitted to the survey system after each essay was completed, so that partial results were retained if participants ran out of time and did not finish scoring all six essays. Cronbach’s \(\alpha\) 56 allows us to determine the inter-rater reliability for each rating criterion and data source (human, ChatGPT-3, ChatGPT-4), in order to understand the reliability of our data not only overall, but also for each data source and rating criterion. We use two-sided Wilcoxon rank-sum tests 57 to confirm the significance of the differences between the data sources for each criterion, and the same tests to determine the significance of the differences in the linguistic characteristics. This results in three comparisons (human vs. ChatGPT-3, human vs. ChatGPT-4, ChatGPT-3 vs. ChatGPT-4) for each of the seven rating criteria and each of the seven linguistic characteristics, i.e. 42 tests. We use the Holm-Bonferroni method 58 to correct for multiple tests and achieve a family-wise error rate of 0.05. We report the effect size using Cohen’s d 59 . While our data is not perfectly normal, it also does not have severe outliers, so we prefer the clear interpretation of Cohen’s d over the slightly more appropriate, but less accessible, non-parametric effect size measures. We report point plots with estimates of the mean scores for each data source and criterion, including the 95% confidence intervals of these means. The confidence intervals are estimated non-parametrically based on bootstrap sampling. We further visualize the distribution for each criterion using violin plots to provide a visual indicator of the spread of the data (see Supplementary material S4 ).
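The test battery can be sketched on synthetic ratings; the means and spreads below are invented for illustration only, and `scipy.stats.ranksums` is the two-sided Wilcoxon rank-sum test named above:

```python
import numpy as np
from scipy.stats import ranksums

def cohens_d(a, b):
    """Cohen's d with a pooled standard deviation."""
    pooled = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                     / (len(a) + len(b) - 2))
    return (b.mean() - a.mean()) / pooled

def holm_bonferroni(pvals, alpha=0.05):
    """Step-down Holm-Bonferroni correction: one reject/accept decision
    per p-value at family-wise error rate alpha."""
    pvals = np.asarray(pvals)
    order = np.argsort(pvals)
    reject = np.zeros(len(pvals), dtype=bool)
    for rank, idx in enumerate(order):
        if pvals[idx] <= alpha / (len(pvals) - rank):
            reject[idx] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return reject

rng = np.random.default_rng(0)
human = rng.normal(3.9, 1.1, 200)  # synthetic 7-point ratings, illustration only
gpt4 = rng.normal(5.2, 1.1, 200)

stat, p = ranksums(human, gpt4)    # two-sided by default
d = cohens_d(human, gpt4)
```

With a mean difference of more than one standard deviation, the sketch reproduces the pattern reported below: a tiny p-value and a large positive d in favor of the generated essays.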

Further, we use the self-assessments of English skills and of confidence in the essay ratings as confounding variables. Through this, we determine whether the ratings are affected by language skills or confidence rather than by the actual quality of the essays. We control for the impact of these variables by measuring Pearson’s correlation coefficient r 60 between the self-assessments and the ratings. We also determine whether the linguistic features are correlated with the ratings as expected. Sentence complexity (both tree depth and dependent clauses) and nominalization are indicators of the complexity of the language. Similarly, the use of discourse markers should signal a proper logical structure, and a large lexical diversity should be correlated with the ratings for vocabulary. As above, we measure Pearson’s r and use a two-sided significance test based on a \(\beta\) -distribution that models the expected correlations, as implemented in scipy 61 , with the Holm-Bonferroni method to account for multiple tests. However, we note that, given our amount of data, it is likely that all correlations, even tiny ones, are significant. Consequently, our interpretation of these results focuses on the strength of the correlations.
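The confound check reduces to one correlation per pair of variables. A sketch on synthetic data (the weak dependence of ratings on skill is invented for illustration; `scipy.stats.pearsonr` performs the two-sided beta-distribution-based test mentioned above):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
skill = rng.normal(size=500)                  # self-assessed English skill
ratings = 0.2 * skill + rng.normal(size=500)  # ratings only weakly tied to skill

r, p = pearsonr(skill, ratings)
# With n=500, even this weak correlation is very likely significant,
# so interpretation should rest on the magnitude of r, not on p alone.
```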

Our statistical analysis of the data is implemented in Python. We use pandas 1.5.3 and numpy 1.24.2 for data processing, pingouin 0.5.3 for the calculation of Cronbach’s \(\alpha\) , scipy 1.10.1 for the Wilcoxon rank-sum tests and Pearson’s r , and seaborn 0.12.2 for the generation of plots, including the calculation of error bars that visualize the confidence intervals.
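While the paper computes Cronbach's \(\alpha\) with pingouin 0.5.3, the statistic itself is short enough to write out directly, as a sketch:

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for a matrix with one row per rated essay and one
    column per rater: k/(k-1) * (1 - sum of per-rater variances / variance
    of the per-essay rating sums)."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]
    rater_var = ratings.var(axis=0, ddof=1).sum()
    total_var = ratings.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - rater_var / total_var)
```

Raters who order the essays identically yield \(\alpha = 1\); raters who disagree about the ordering drive \(\alpha\) toward (or below) zero.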

Out of the 111 teachers who completed the questionnaire, 108 rated all six essays, one rated five essays, one rated two essays, and one rated only one essay. This results in 658 ratings for 270 essays (90 topics for each essay type: human-, ChatGPT-3-, and ChatGPT-4-generated), with three ratings for 121 essays, two ratings for 144 essays, and one rating for five essays. The inter-rater agreement is consistently excellent ( \(\alpha >0.9\) ), with the exception of language mastery, where the agreement is good ( \(\alpha =0.89\) , see Table  2 ). Further, the correlation analysis depicted in supplementary material S4 shows weak positive correlations ( \(r \in [0.11, 0.28]\) ) between the self-assessments of English skills and of confidence in the ratings, on the one hand, and the actual ratings, on the other. Overall, this indicates that our ratings are reliable estimates of the actual quality of the essays, with a potential small tendency that greater confidence and better language skills yield higher ratings, independent of the data source.

Table  2 and supplementary material S4 characterize the distribution of the ratings for the essays, grouped by the data source. We observe that for all criteria, we have a clear order of the mean values, with students having the worst ratings, ChatGPT-3 in the middle rank, and ChatGPT-4 with the best performance. We further observe that the standard deviations are fairly consistent and slightly larger than one, i.e. the spread is similar for all ratings and essays. This is further supported by the visual analysis of the violin plots.

The statistical analysis of the ratings reported in Table  4 shows that differences between the human-written essays and the ones generated by both ChatGPT models are significant. The effect sizes for human versus ChatGPT-3 essays are between 0.52 and 1.15, i.e. a medium ( \(d \in [0.5,0.8)\) ) to large ( \(d \in [0.8, 1.2)\) ) effect. On the one hand, the smallest effects are observed for the expressiveness and complexity, i.e. when it comes to the overall comprehensiveness and complexity of the sentence structures, the differences between the humans and the ChatGPT-3 model are smallest. On the other hand, the difference in language mastery is larger than all other differences, which indicates that humans are more prone to making mistakes when writing than the NLG models. The magnitude of differences between humans and ChatGPT-4 is larger with effect sizes between 0.88 and 1.43, i.e., a large to very large ( \(d \in [1.2, 2)\) ) effect. Same as for ChatGPT-3, the differences are smallest for expressiveness and complexity and largest for language mastery. Please note that the difference in language mastery between humans and both GPT models does not mean that the humans have low scores for language mastery (M=3.90), but rather that the NLG models have exceptionally high scores (M=5.03 for ChatGPT-3, M=5.25 for ChatGPT-4).

When we consider the differences between the two GPT models, we observe that while ChatGPT-4 has consistently higher mean values for all criteria, only the differences for logic and composition, vocabulary and text linking, and complexity are significant. The effect sizes are between 0.45 and 0.5, i.e. small ( \(d \in [0.2, 0.5)\) ) and medium. Thus, while GPT-4 seems to be an improvement over GPT-3.5 in general, the only clear indicator of this is a better and clearer logical composition and more complex writing with a more diverse vocabulary.

We also observe significant differences in the distribution of the linguistic characteristics between all three groups (see Table  3 ). Sentence complexity (depth) is the only category without a significant difference between humans and ChatGPT-3, as well as between ChatGPT-3 and ChatGPT-4. There is also no significant difference in the category of discourse markers between humans and ChatGPT-3. The magnitude of the effects varies considerably, ranging between 0.39 and 1.93, i.e., between small ( \(d \in [0.2, 0.5)\) ) and very large. However, in contrast to the ratings, there is no clear tendency regarding the direction of the differences. For instance, while the ChatGPT models write more complex sentences and use more nominalizations, humans tend to use more modals and epistemic markers. The lexical diversity of humans is higher than that of ChatGPT-3 but lower than that of ChatGPT-4. While there is no difference in the use of discourse markers between humans and ChatGPT-3, ChatGPT-4 uses significantly fewer discourse markers.

We detect the expected positive correlations between the complexity ratings and the linguistic markers for sentence complexity ( \(r=0.16\) for depth, \(r=0.19\) for clauses) and nominalizations ( \(r=0.22\) ). However, we observe a negative correlation between the logic ratings and the discourse markers ( \(r=-0.14\) ), which counters our intuition that more frequent use of discourse indicators makes a text more logically coherent. However, this is in line with previous work: McNamara et al. 45 also find no indication that the use of cohesion indices such as discourse connectives correlates with high- and low-proficiency essays. Finally, we observe the expected positive correlation between the ratings for the vocabulary and the lexical diversity ( \(r=0.12\) ). All observed correlations are significant. However, we note that the strength of all these correlations is weak and that the significance itself should not be over-interpreted due to the large sample size.

Our results provide clear answers to the first two research questions that consider the quality of the generated essays: ChatGPT performs well at writing argumentative student essays and outperforms the quality of the human-written essays significantly. The ChatGPT-4 model has (at least) a large effect and is on average about one point better than humans on a seven-point Likert scale.

Regarding the third research question, we find that there are significant linguistic differences between human- and AI-generated content. The AI-generated essays are highly structured, which is reflected, for instance, by the identical beginnings of the concluding sections of all ChatGPT essays (‘In conclusion, [...]’). The initial sentences of each essay are also very similar, each starting with a general statement based on the main concepts of the essay topic. Although this corresponds to the general structure desired for argumentative essays, it is striking that the ChatGPT models are so rigid in realizing it, whereas the human-written essays are looser in representing the guideline on the linguistic surface. Moreover, the linguistic fingerprint has the counter-intuitive property that the use of discourse markers is negatively correlated with logical coherence. We believe this might be due to the rigid structure of the generated essays: instead of using discourse markers, the AI models provide a clear logical structure by separating the different arguments into paragraphs, thereby reducing the need for discourse markers.

Our data also shows that hallucinations are not a problem in the setting of argumentative essay writing: the essay topics are not really about factual correctness, but rather about argumentation and critical reflection on general concepts which seem to be contained within the knowledge of the AI model. The stochastic nature of the language generation is well-suited for this kind of task, as different plausible arguments can be seen as a sampling from all available arguments for a topic. Nevertheless, we need to perform a more systematic study of the argumentative structures in order to better understand the difference in argumentation between human-written and ChatGPT-generated essay content. Moreover, we also cannot rule out that subtle hallucinations may have been overlooked during the ratings. There are also essays with a low rating for the criteria related to factual correctness, indicating that there might be cases where the AI models still have problems, even if they are, on average, better than the students.

One issue with evaluations of recent large language models is the impact of tainted data, i.e. benchmark data that was part of the models’ training data. While it is certainly possible that the essays that Stab and Gurevych 41 sourced from the internet were part of the training data of the GPT models, the proprietary nature of the model training means that we cannot confirm this. However, we note that the generated essays did not resemble the corpus of human essays at all. Moreover, the topics of the essays are general in the sense that any human should be able to reason and write about them, just by understanding concepts like ‘cooperation’. Consequently, a taint on these general topics, i.e. the fact that they might be present in the training data, is not only possible but actually expected, and it is unproblematic, as it relates to the capability of the models to learn about concepts rather than the memorization of specific task solutions.

While we did everything we could to ensure a sound construct and high validity for our study, certain issues may still affect our conclusions. Most importantly, neither the writers of the essays nor their raters were native English speakers. However, the students purposefully used a forum for English writing frequented by native speakers to ensure the language and content quality of their essays. This indicates that the resulting essays are likely above average for non-native speakers, as they went through at least one round of revisions with the help of native speakers. The teachers were informed that part of the training would be in English to prevent registrations from people without English language skills. Moreover, the self-assessment of language skills was only weakly correlated with the ratings, indicating that the threat to the soundness of our results is low. While we cannot definitively rule out that other human raters would rate the essays differently, the high inter-rater agreement indicates that this is unlikely.

However, our reliance on essays written by non-native speakers affects the external validity and the generalizability of our results. It is certainly possible that native-speaking students would perform better on the criteria related to language skills, though it is unclear by how much. However, the language skills were a particular strength of the AI models, meaning that while the gap might be smaller, it is still reasonable to conclude that the AI models would perform at least comparably to humans, and possibly still better, just with a smaller margin. While we cannot rule out a difference for the content-related criteria, we also see no strong argument why native speakers should have better arguments than non-native speakers. Thus, while our results might not fully translate to native speakers, we see no reason why the content-related aspects should differ substantially. Further, our results were obtained from high-school-level essays. Native and non-native speakers with higher-education degrees, or experts in their fields, would likely achieve better performance, such that the gap between the AI models and humans would likely also be smaller in such a setting.

We further note that the essay topics may not be an unbiased sample. While Stab and Gurevych 41 randomly sampled the essays from the writing feedback section of an essay forum, it is unclear whether the essays posted there are representative of the general population of essay topics. Nevertheless, we believe that this threat is fairly low because our results are consistent and do not seem to be influenced by particular topics. Further, we cannot conclude with certainty how our results generalize beyond ChatGPT-3 and ChatGPT-4 to similar models such as Bard ( https://bard.google.com/?hl=en ), Alpaca, and Dolly. The results for the linguistic characteristics are especially hard to predict. However, since, to the best of our knowledge and given the proprietary nature of some of these models, the general approach behind these models is similar, the trends for essay quality should hold for models with comparable size and training procedures.

Finally, we want to note that the current speed of progress with generative AI is extremely fast and we are studying moving targets: ChatGPT 3.5 and 4 today are already not the same as the models we studied. Due to a lack of transparency regarding the specific incremental changes, we cannot know or predict how this might affect our results.

Our results provide a strong indication that the fear many teaching professionals have is warranted: the way students do homework and teachers assess it needs to change in a world of generative AI models. For non-native speakers, our results show that when students want to maximize their essay grades, they could easily do so by relying on results from AI models like ChatGPT. The very strong performance of the AI models indicates that this might also be the case for native speakers, though the difference in language skills is probably smaller. However, this is not and cannot be the goal of education. Consequently, educators need to change how they approach homework. Instead of just assigning and grading essays, we need to reflect more on the output of AI tools regarding their reasoning and correctness. AI models need to be seen as an integral part of education, but one which requires careful reflection and training of critical thinking skills.

Furthermore, teachers need to adapt strategies for teaching writing skills: as with the use of calculators, it is necessary to critically reflect with the students on when and how to use those tools. For instance, constructivists 62 argue that learning is enhanced by the active design and creation of unique artifacts by students themselves. In the present case this means that, in the long term, educational objectives may need to be adjusted. This is analogous to teaching good arithmetic skills to younger students and then allowing and encouraging students to use calculators freely in later stages of education. Similarly, once a sound level of literacy has been achieved, strongly integrating AI models in lesson plans may no longer run counter to reasonable learning goals.

In terms of shedding light on the quality and structure of AI-generated essays, this paper makes an important contribution by offering an independent, large-scale and statistically sound account of essay quality, comparing human-written and AI-generated texts. By comparing different versions of ChatGPT, we also offer a glance into the development of these models over time in terms of their linguistic properties and the quality they exhibit. Our results show that while the language generated by ChatGPT is considered very good by humans, there are also notable structural differences, e.g. in the use of discourse markers. This demonstrates that an in-depth consideration is required not only of the capabilities of generative AI models (i.e. which tasks they can be used for), but also of the language they generate. For example, if we read many AI-generated texts that use fewer discourse markers, it raises the question of whether and how this would affect our own use of discourse markers. Understanding how AI-generated texts differ from human-written ones enables us to look for these differences, to reason about their potential impact, and to study and possibly mitigate this impact.

Data availability

The datasets generated during and/or analysed during the current study are available in the Zenodo repository, https://doi.org/10.5281/zenodo.8343644 .

Code availability

All materials are available online in form of a replication package that contains the data and the analysis code, https://doi.org/10.5281/zenodo.8343644 .

Ouyang, L. et al. Training language models to follow instructions with human feedback (2022). arXiv:2203.02155 .

Ruby, D. 30+ detailed chatgpt statistics–users & facts (sep 2023). https://www.demandsage.com/chatgpt-statistics/ (2023). Accessed 09 June 2023.

Leahy, S. & Mishra, P. TPACK and the Cambrian explosion of AI. In Society for Information Technology & Teacher Education International Conference , (ed. Langran, E.) 2465–2469 (Association for the Advancement of Computing in Education (AACE), 2023).

Ortiz, S. Need an ai essay writer? here’s how chatgpt (and other chatbots) can help. https://www.zdnet.com/article/how-to-use-chatgpt-to-write-an-essay/ (2023). Accessed 09 June 2023.

OpenAI chat interface. https://chat.openai.com/ . Accessed 09 June 2023.

OpenAI. Gpt-4 technical report (2023). arXiv:2303.08774 .

Brown, T. B. et al. Language models are few-shot learners (2020). arXiv:2005.14165 .

Wang, B. Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX. https://github.com/kingoflolz/mesh-transformer-jax (2021).

Wei, J. et al. Finetuned language models are zero-shot learners. In International Conference on Learning Representations (2022).

Taori, R. et al. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca (2023).

Cai, Z. G., Haslett, D. A., Duan, X., Wang, S. & Pickering, M. J. Does chatgpt resemble humans in language use? (2023). arXiv:2303.08014 .

Mahowald, K. A discerning several thousand judgments: Gpt-3 rates the article + adjective + numeral + noun construction (2023). arXiv:2301.12564 .

Dentella, V., Murphy, E., Marcus, G. & Leivada, E. Testing ai performance on less frequent aspects of language reveals insensitivity to underlying meaning (2023). arXiv:2302.12313 .

Guo, B. et al. How close is chatgpt to human experts? comparison corpus, evaluation, and detection (2023). arXiv:2301.07597 .

Zhao, W. et al. Is chatgpt equipped with emotional dialogue capabilities? (2023). arXiv:2304.09582 .

Keim, D. A. & Oelke, D. Literature fingerprinting: A new method for visual literary analysis. In 2007 IEEE Symposium on Visual Analytics Science and Technology , 115–122, https://doi.org/10.1109/VAST.2007.4389004 (IEEE, 2007).

El-Assady, M. et al. Interactive visual analysis of transcribed multi-party discourse. In Proceedings of ACL 2017, System Demonstrations , 49–54 (Association for Computational Linguistics, Vancouver, Canada, 2017).

El-Assady, M., Hautli-Janisz, A. & Butt, M. Discourse maps - feature encoding for the analysis of verbatim conversation transcripts. In Visual Analytics for Linguistics , CSLI Lecture Notes, Number 220, 115–147 (Stanford: CSLI Publications, 2020).

Foulis, M., Visser, J. & Reed, C. Dialogical fingerprinting of debaters. In Proceedings of COMMA 2020 , 465–466, https://doi.org/10.3233/FAIA200536 (Amsterdam: IOS Press, 2020).

Foulis, M., Visser, J. & Reed, C. Interactive visualisation of debater identification and characteristics. In Proceedings of the COMMA Workshop on Argument Visualisation, COMMA , 1–7 (2020).

Chatzipanagiotidis, S., Giagkou, M. & Meurers, D. Broad linguistic complexity analysis for Greek readability classification. In Proceedings of the 16th Workshop on Innovative Use of NLP for Building Educational Applications , 48–58 (Association for Computational Linguistics, Online, 2021).

Ajili, M., Bonastre, J.-F., Kahn, J., Rossato, S. & Bernard, G. FABIOLE, a speech database for forensic speaker comparison. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16) , 726–733 (European Language Resources Association (ELRA), Portorož, Slovenia, 2016).

Deutsch, T., Jasbi, M. & Shieber, S. Linguistic features for readability assessment. In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications , 1–17, https://doi.org/10.18653/v1/2020.bea-1.1 (Association for Computational Linguistics, Seattle, WA, USA → Online, 2020).

Fiacco, J., Jiang, S., Adamson, D. & Rosé, C. Toward automatic discourse parsing of student writing motivated by neural interpretation. In Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022) , 204–215, https://doi.org/10.18653/v1/2022.bea-1.25 (Association for Computational Linguistics, Seattle, Washington, 2022).

Weiss, Z., Riemenschneider, A., Schröter, P. & Meurers, D. Computationally modeling the impact of task-appropriate language complexity and accuracy on human grading of German essays. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications , 30–45, https://doi.org/10.18653/v1/W19-4404 (Association for Computational Linguistics, Florence, Italy, 2019).

Yang, F., Dragut, E. & Mukherjee, A. Predicting personal opinion on future events with fingerprints. In Proceedings of the 28th International Conference on Computational Linguistics , 1802–1807, https://doi.org/10.18653/v1/2020.coling-main.162 (International Committee on Computational Linguistics, Barcelona, Spain (Online), 2020).

Tumarada, K. et al. Opinion prediction with user fingerprinting. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021) , 1423–1431 (INCOMA Ltd., Held Online, 2021).

Rocca, R. & Yarkoni, T. Language as a fingerprint: Self-supervised learning of user encodings using transformers. In Findings of the Association for Computational Linguistics: EMNLP . 1701–1714 (Association for Computational Linguistics, Abu Dhabi, United Arab Emirates, 2022).

Aiyappa, R., An, J., Kwak, H. & Ahn, Y.-Y. Can we trust the evaluation on chatgpt? (2023). arXiv:2303.12767 .

Yeadon, W., Inyang, O.-O., Mizouri, A., Peach, A. & Testrow, C. The death of the short-form physics essay in the coming ai revolution (2022). arXiv:2212.11661 .

Turing, A. M. Computing machinery and intelligence. Mind LIX , 433–460. https://doi.org/10.1093/mind/LIX.236.433 (1950).

Kortemeyer, G. Could an artificial-intelligence agent pass an introductory physics course? (2023). arXiv:2301.12127 .

Kung, T. H. et al. Performance of chatgpt on usmle: Potential for ai-assisted medical education using large language models. PLOS Digital Health 2 , 1–12. https://doi.org/10.1371/journal.pdig.0000198 (2023).


Frieder, S. et al. Mathematical capabilities of chatgpt (2023). arXiv:2301.13867 .

Yuan, Z., Yuan, H., Tan, C., Wang, W. & Huang, S. How well do large language models perform in arithmetic tasks? (2023). arXiv:2304.02015 .

Touvron, H. et al. Llama: Open and efficient foundation language models (2023). arXiv:2302.13971 .

Chung, H. W. et al. Scaling instruction-finetuned language models (2022). arXiv:2210.11416 .

Workshop, B. et al. Bloom: A 176b-parameter open-access multilingual language model (2023). arXiv:2211.05100 .

Spencer, S. T., Joshi, V. & Mitchell, A. M. W. Can ai put gamma-ray astrophysicists out of a job? (2023). arXiv:2303.17853 .

Cherian, A., Peng, K.-C., Lohit, S., Smith, K. & Tenenbaum, J. B. Are deep neural networks smarter than second graders? (2023). arXiv:2212.09993 .

Stab, C. & Gurevych, I. Annotating argument components and relations in persuasive essays. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers , 1501–1510 (Dublin City University and Association for Computational Linguistics, Dublin, Ireland, 2014).

Essay forum. https://essayforum.com/ . Accessed 07 September 2023.

Common European Framework of Reference for Languages (CEFR). https://www.coe.int/en/web/common-european-framework-reference-languages . Accessed 09 July 2023.

KMK guidelines for essay assessment. http://www.kmk-format.de/material/Fremdsprachen/5-3-2_Bewertungsskalen_Schreiben.pdf . Accessed 09 July 2023.

McNamara, D. S., Crossley, S. A. & McCarthy, P. M. Linguistic features of writing quality. Writ. Commun. 27 , 57–86 (2010).

McCarthy, P. M. & Jarvis, S. Mtld, vocd-d, and hd-d: A validation study of sophisticated approaches to lexical diversity assessment. Behav. Res. Methods 42 , 381–392 (2010).


Dasgupta, T., Naskar, A., Dey, L. & Saha, R. Augmenting textual qualitative features in deep convolution recurrent neural network for automatic essay scoring. In Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications , 93–102 (2018).

Koizumi, R. & In’nami, Y. Effects of text length on lexical diversity measures: Using short texts with less than 200 tokens. System 40 , 554–564 (2012).

spaCy: Industrial-strength natural language processing in Python. https://spacy.io/ .

Siskou, W., Friedrich, L., Eckhard, S., Espinoza, I. & Hautli-Janisz, A. Measuring plain language in public service encounters. In Proceedings of the 2nd Workshop on Computational Linguistics for Political Text Analysis (CPSS-2022) (Potsdam, Germany, 2022).

El-Assady, M. & Hautli-Janisz, A. Discourse Maps - Feature Encoding for the Analysis of Verbatim Conversation Transcripts (CSLI Lecture Notes, CSLI Publications, Center for the Study of Language and Information, 2019).

Hautli-Janisz, A. et al. QT30: A corpus of argument and conflict in broadcast debate. In Proceedings of the Thirteenth Language Resources and Evaluation Conference , 3291–3300 (European Language Resources Association, Marseille, France, 2022).

Somasundaran, S. et al. Towards evaluating narrative quality in student writing. Trans. Assoc. Comput. Linguist. 6 , 91–106 (2018).

Nadeem, F., Nguyen, H., Liu, Y. & Ostendorf, M. Automated essay scoring with discourse-aware neural models. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications , 484–493, https://doi.org/10.18653/v1/W19-4450 (Association for Computational Linguistics, Florence, Italy, 2019).

Prasad, R. et al. The Penn Discourse TreeBank 2.0. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC’08) (European Language Resources Association (ELRA), Marrakech, Morocco, 2008).

Cronbach, L. J. Coefficient alpha and the internal structure of tests. Psychometrika 16 , 297–334. https://doi.org/10.1007/bf02310555 (1951).


Wilcoxon, F. Individual comparisons by ranking methods. Biom. Bull. 1 , 80–83 (1945).

Holm, S. A simple sequentially rejective multiple test procedure. Scand. J. Stat. 6 , 65–70 (1979).


Cohen, J. Statistical power analysis for the behavioral sciences (Academic press, 2013).

Freedman, D., Pisani, R. & Purves, R. Statistics 4th edn (W. W. Norton & Company, New York, 2007).

Scipy documentation. https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.pearsonr.html . Accessed 09 June 2023.

Windschitl, M. Framing constructivism in practice as the negotiation of dilemmas: An analysis of the conceptual, pedagogical, cultural, and political challenges facing teachers. Rev. Educ. Res. 72 , 131–175 (2002).

Download references

Open Access funding enabled and organized by Projekt DEAL.

Author information

Authors and Affiliations

Faculty of Computer Science and Mathematics, University of Passau, Passau, Germany

Steffen Herbold, Annette Hautli-Janisz, Ute Heuer, Zlata Kikteva & Alexander Trautsch


Contributions

S.H., A.HJ., and U.H. conceived the experiment; S.H., A.HJ, and Z.K. collected the essays from ChatGPT; U.H. recruited the study participants; S.H., A.HJ., U.H. and A.T. conducted the training session and questionnaire; all authors contributed to the analysis of the results, the writing of the manuscript, and review of the manuscript.

Corresponding author

Correspondence to Steffen Herbold.

Ethics declarations

Competing interests.

The authors declare no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Information 1. Supplementary Information 2. Supplementary Information 3. Supplementary Tables. Supplementary Figures.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Herbold, S., Hautli-Janisz, A., Heuer, U. et al. A large-scale comparison of human-written versus ChatGPT-generated essays. Sci Rep 13 , 18617 (2023). https://doi.org/10.1038/s41598-023-45644-9


Received : 01 June 2023

Accepted : 22 October 2023

Published : 30 October 2023

DOI : https://doi.org/10.1038/s41598-023-45644-9





Will ChatGPT Affect the College Application Essay?


The new AI-based writing tool, ChatGPT, has taken the world by storm. The New York Times calls it “the best artificial intelligence chatbot ever released to the general public.” More than one million people signed up to test it in the first five days after its release. But will this tool change the landscape of college admissions as students start using it to write their college essays? And should you be using it?

ChatGPT, a large language model developed by OpenAI, has the potential to assist students in writing college essays. However, it is important to note that the model should not be used as a substitute for a student's own writing and critical thinking skills.

Every publication out there has weighed in on what this new tool will mean for college-going students. Forbes got the service to write two different college essays in just 20 minutes!


Some History Around AI

John McCarthy, Professor Emeritus at Stanford University, defined AI as the science and engineering of making intelligent machines. The forerunner to these smart machines was built in 1951 by Marvin Minsky, later Professor of Electrical Engineering and Computer Science at MIT.

You’ll already have seen AI in action: search engines, GPS navigators and self-driving cars are all AI-managed. More and more schools and universities are adopting the technology to enhance teaching and learning, too.

Modern-day AI displays some of the behaviours you’d usually associate with human intelligence – planning, learning, reasoning, problem-solving, perception and social intelligence. It combines these behaviours with the capacity to process vast amounts of data far faster and with less error than a human brain. This gives it the power to help solve problems in almost every area of daily life.

The Benefits of ChatGPT

ChatGPT is a machine learning model that is trained on a large dataset of text, allowing it to generate human-like responses to prompts. This technology has many potential applications, including writing. With the ability to generate coherent and grammatically correct sentences, ChatGPT could be used to assist students in writing college essays.

One of the key benefits of using ChatGPT for essay writing is its ability to generate ideas and potential topic sentences. This can be particularly helpful for students who struggle with writer's block or are unsure of what to write about. Given a prompt, such as the essay question itself, ChatGPT can generate a list of potential topic sentences or ideas for the student to consider.

Another potential benefit of using ChatGPT for essay writing is its ability to generate coherent and grammatically correct sentences. This can be beneficial for students who struggle with grammar and sentence structure. ChatGPT can provide a starting point for the student, who can then edit and revise the generated text to suit their own writing style and voice. Tools such as Grammarly and Google Docs’ Smart Compose already provide these services and are other examples of AI-based edtech tools that have already entered the classroom.


Too Early To Tell

However, it is important to note that ChatGPT is not a substitute for a student's own writing and critical thinking skills. While the model can assist with generating ideas and providing grammatically correct sentences, it cannot replace a student's own creativity and understanding of the topic. Additionally, ChatGPT's output is based on the data it has been trained on, which may not always align with the student's own perspective or understanding of the topic.

Jim Jump, the academic dean and director of college counseling at St. Christopher’s School in Richmond, Va., told Inside Higher Ed that he does not see the tool taking over basic writing skills. He read the Forbes essays and said that both essays “resemble cliché essays, with neither answering the prompt in a convincing way.”

The most important aspect of a student’s college essay is the personal detail it contains. That is what helps admissions officers better understand the student and whether they will thrive at a certain university. As Jump says, “Great essays have a spark to them that is not about the ability to write but rather the ability to think. Great personal essays are clever and insightful, with an authenticity and a sincerity that’s—well, personal.”

It's also important to note that many institutions have strict policies against plagiarism, which include using AI-generated content. It's highly recommended to check the policies of your university or college before using AI-generated content in any work you submit.

Additionally, it's important to understand the ethical implications of using AI-generated content. Even though ChatGPT may assist in generating grammatically correct sentences, it does not have the ability to understand the context of the essay or the ethical implications of the content it generates. It is the student's responsibility to ensure that the content is ethical and appropriate.

At this point, many are approaching ChatGPT the same way society has approached any new invention: with trepidation and amazement. The calculator in its day probably prompted similar debates about whether it was the “death of math.” Several educators believe that while the bot can aggregate knowledge, it cannot actually synthesize a unique theory or idea, the Brookings Institution reported.

Many people argue that tools will be invented that will help professors determine whether a student has used AI in their essays. A Princeton University graduate has just created an app that can tell whether an essay has been written by ChatGPT, NPR has reported. GPTZero uses two indicators – perplexity and burstiness – to determine whether a piece of writing has been generated by a bot. "For so long, AI has been a black box where we really don't know what's going on inside," said the app’s inventor Edward Tian. "And with GPTZero, I wanted to start pushing back and fighting against that."
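GPTZero’s scoring is proprietary, but the “burstiness” intuition (human prose tends to vary sentence length more than machine-generated text does) is easy to illustrate. The Python sketch below is a toy stand-in for that idea, not GPTZero’s actual method: it simply measures the spread of sentence lengths in a passage.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Toy 'burstiness' proxy: the standard deviation of sentence
    lengths, in words. Uniform, machine-like sentences score near
    zero; prose that mixes short and long sentences scores higher."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

uniform = "The cat sat here. The dog sat here. The fox sat here."
varied = ("Stop. The storm rolled in off the coast before anyone "
          "had time to close the shutters or call the children inside.")
print(burstiness(uniform))  # 0.0 — every sentence is four words long
print(burstiness(varied))   # > 0 — one-word and twenty-word sentences
```

A real detector combines signals like this with model-based perplexity scores; sentence-length variance on its own is far too crude to catch, or clear, anyone.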

Final Thoughts

In conclusion, ChatGPT has the potential to assist students in writing college essays. However, it is important to remember that the model should not be used as a substitute for a student's own writing and critical thinking skills. For the time being, it does not seem that the college essay is going anywhere. Additionally, it's important to check the policies of your university or college before using AI-generated content and to be mindful of the ethical implications of the content generated.


Key Resources & Further Reading

  • Join our free webinars on US university applications
  • Free eBooks and guides to help with the college application process
  • College Admission Calculator
  • How to apply to college?
  • College application process explained



How to Write an Essay with ChatGPT | Tips & Examples

Published on 26 June 2023 by Koen Driessen.

Passing off AI-generated text as your own work is widely considered plagiarism. However, when used correctly, generative AI tools like ChatGPT can legitimately help guide your writing process.

These tools are especially helpful in the preparation and revision stages of your essay writing.

You can use ChatGPT to:

  • Write a research question
  • Develop an outline
  • Find relevant source information
  • Summarise or paraphrase text
  • Get feedback


Table of contents

  • Writing a research question
  • Developing an outline
  • Finding source recommendations
  • Summarising/paraphrasing text
  • Getting feedback
  • Other interesting articles
  • Frequently asked questions

You can use ChatGPT to brainstorm potential research questions or to narrow down your thesis statement . Begin by inputting a description of the research topic or assigned question. Then include a prompt like “Write 3 possible research questions on this topic”.

You can make the prompt as specific as you like. For example, you can include the writing level (e.g., high school essay, college essay), perspective (e.g., first person) and the type of essay you intend to write (e.g., argumentative , descriptive , expository , or narrative ).

You can also mention any facts or viewpoints you’ve gathered that should be incorporated into the output.

If the output doesn’t suit your topic, you can click “Regenerate response” to have the tool generate a new response. You can do this as many times as you like, and you can try making your prompt more specific if you struggle to get the results you want.
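The same kind of prompt can also be sent programmatically. The sketch below is a minimal illustration using the official `openai` Python package (v1 client); `build_prompt` is a hypothetical helper that mirrors the advice above, the model name is an assumption, and `ask_chatgpt` needs an `OPENAI_API_KEY` environment variable, so it is defined but not called here.

```python
def build_prompt(topic: str, n: int = 3,
                 level: str = "high school",
                 essay_type: str = "argumentative") -> str:
    """Assemble a specific prompt, stating the writing level and the
    type of essay as suggested above."""
    return (f"Generate {n} possible research questions for an "
            f'{essay_type} {level} essay on the following topic: "{topic}"')

def ask_chatgpt(prompt: str) -> str:
    """Send the prompt through the OpenAI API (assumes openai>=1.0 is
    installed and OPENAI_API_KEY is set; not invoked in this sketch)."""
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: substitute your chat model
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

prompt = build_prompt("The long-term impact of the Covid-19 pandemic")
print(prompt)
```

If the first batch of questions misses the mark, the same loop applies as in the chat interface: tighten the arguments to `build_prompt` and send the prompt again.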

Example: Using ChatGPT to write a research question

  • Generate three possible research questions for an argumentative high school essay on the following topic: “The long-term impact of the Covid-19 pandemic.”
  • To what extent has the Covid-19 pandemic exacerbated existing social inequalities and what measures can be taken to address these disparities in the long term?
  • How has the Covid-19 pandemic transformed education systems, and what are the potential long-term implications for students’ learning outcomes and educational equity?
  • What lessons can be learned from the Covid-19 pandemic to enhance global preparedness and response to future public health crises, and how can these lessons be effectively implemented in the long term?
  • Based on the second question, develop three further research questions with a specific focus on distance learning.
  • What are the long-term effects of widespread adoption of distance learning during the Covid-19 pandemic on students’ academic performance, motivation, and overall educational outcomes?
  • How has the shift to distance learning impacted the social and emotional well-being of students, and what strategies can be implemented to address any negative consequences in the long term?
  • What are the key factors that contribute to the success of distance learning programs, and how can educational institutions and policymakers ensure equitable access to quality education in the long term, especially for marginalized and underserved student populations?


Once you’ve decided on a research question, you can use ChatGPT to develop an essay outline . This can help keep you on track by giving you a clear idea of what topics you want to discuss and in what order.

Do this by prompting ChatGPT to create an outline for a specific research question, mentioning any topics or points you want to discuss. You can also mention the writing level and the intended length of your essay so that the tool generates an appropriate outline.

You can then refine this by further prompting ChatGPT or editing the outline manually until it meets your requirements.

Examples: Using ChatGPT to generate an essay outline

  • Develop an outline for an argumentative high school essay with the following research question: “What are the long-term effects of widespread adoption of distance learning during the Covid-19 pandemic on students’ academic performance and overall educational outcomes?” The essay will be about 4 pages long.
  • Background information on the Covid-19 pandemic and the shift to distance learning
  • Thesis statement: The widespread adoption of distance learning during the Covid-19 pandemic has had both positive and negative long-term effects on students’ academic performance and overall educational outcomes.
  • Increased flexibility and personalized learning opportunities
  • Access to a wider range of resources and educational materials
  • Development of self-directed learning skills
  • Critiques and limitations of previous studies
  • Lack of face-to-face interaction and reduced social engagement
  • Increased potential for distractions and lack of structure
  • Technological barriers and unequal access to resources
  • Examination of studies and data on academic achievement during distance learning
  • Analysis of long-term effects on educational attainment and future opportunities
  • Discussion of potential implications for college admissions and career readiness
  • Strategies for promoting student engagement and motivation in distance learning
  • Importance of teacher-student communication and support
  • Addressing technological disparities and ensuring equitable access to resources
  • Recap of the main points discussed
  • Emphasize the need for further research and long-term planning to optimize distance learning experiences
  • Restate the thesis and provide a final thought on the potential long-term effects of distance learning on students’ academic performance and overall educational outcomes

Once you know the scope of your essay, you can find relevant primary and secondary sources to support your argument.

However, we don’t recommend prompting ChatGPT to generate a list of sources as it occasionally makes mistakes (like listing nonexistent sources). Instead, it’s a good idea to use ChatGPT to get suggestions for the types of sources relevant to your essay and track them down using a credible research database or your institution’s library.

When you have found relevant sources, use a specialised tool like the Scribbr Citation Generator to cite them in your essay.

Example: Using ChatGPT to find sources

  • What types of sources can I use to write an essay on the following research question? “What are the long-term effects of widespread adoption of distance learning during the Covid-19 pandemic on students’ academic performance and overall educational outcomes?”
  • Academic Journals and Research Articles: Look for scholarly articles and research studies published in peer-reviewed academic journals. These sources often provide in-depth analysis, empirical evidence, and statistical data on the long-term effects of distance learning.
  • Educational and Government Reports: Reports published by educational institutions, government agencies, or educational organizations can offer valuable insights into the impact of distance learning on academic performance and educational outcomes. These reports may include surveys, case studies, and data analysis specific to the Covid-19 pandemic and distance learning.
  • Official Websites and Publications: Explore the official websites of educational institutions, education ministries, or educational technology companies. They often publish articles, white papers, or research findings related to distance learning and its effects on students’ academic performance.
  • Expert Interviews or Opinion Pieces: Seek out interviews or opinion pieces from experts in the field of education, distance learning, or pedagogy. These sources can provide valuable perspectives and insights on the long-term effects of distance learning on students’ educational outcomes.
  • Online Databases and Digital Libraries: Utilize online databases and digital libraries such as JSTOR, Google Scholar, or academic databases provided by your educational institution. These platforms offer a wide range of scholarly articles, research papers, and publications that can support your arguments.
  • Surveys and Studies Conducted by Educational Institutions: Some schools or universities may have conducted their own surveys or studies to evaluate the impact of distance learning on students. Check if your educational institution has published any research findings or survey results that are relevant to your topic.

During your writing process, you can use ChatGPT as a summarising tool to condense text to its essential ideas or as a paraphraser to rephrase text in clear, accessible language. Using ChatGPT in these ways can help you to understand complex material, express your own ideas more clearly, and avoid repetition.

Simply input the relevant text and prompt the tool to summarise or paraphrase it. Alternatively, you can use Scribbr’s free text summariser and Scribbr’s free paraphrasing tool , which are specifically designed for these purposes.
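One practical wrinkle when summarising: ChatGPT accepts only a limited amount of text per message, so a long document has to be condensed in pieces. The helper below is an illustrative sketch (`chunk_text` is a hypothetical function, and the 500-word default is an arbitrary assumption): it splits a document on paragraph boundaries into prompt-sized chunks, each of which can be sent with a prompt like “Summarise the following text:”.

```python
def chunk_text(text: str, max_words: int = 500) -> list:
    """Split a long document into chunks of at most `max_words` words,
    breaking on paragraph boundaries so no paragraph is cut in half.
    (A single paragraph longer than the limit becomes its own chunk.)"""
    chunks, current, count = [], [], 0
    for para in text.split("\n\n"):
        words = len(para.split())
        if current and count + words > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks

doc = "First paragraph of a long report.\n\nSecond paragraph of it."
prompts = ["Summarise the following text:\n\n" + c
           for c in chunk_text(doc, max_words=6)]
print(len(prompts))  # 2 — each paragraph became its own chunk
```

Summarise each chunk in turn, then ask for a summary of the concatenated chunk summaries to get a single overview; the same chunking trick works when paraphrasing long passages.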

Example: Using ChatGPT to summarise/paraphrase

  • Paraphrase the following text using clear language: “Engaging in an extensive examination of the intricate interplay between remote instructional methods and academic achievement necessitates consideration of diverse factors, including the pedagogical efficacy of virtual platforms, the socioemotional ramifications of reduced face-to-face interactions, the equitable accessibility of technological resources, and the potential long-lasting implications for students’ educational trajectories and future opportunities.”
  • To thoroughly explore the connection between remote teaching methods and academic success, it is crucial to take into account various factors. These factors include evaluating how well virtual platforms support effective teaching, understanding the impact of limited in-person interactions on students’ emotional well-being, ensuring fair access to technology resources, and recognizing the potential long-term effects on students’ educational paths and future prospects.

Once you’ve written your essay, you can prompt ChatGPT to provide feedback and recommend improvements.

You can indicate how the tool should provide feedback (e.g., “Act like a university professor examining papers”) and include the specific points you want to receive feedback on (e.g., consistency of tone, clarity of argument, appropriateness of evidence).

While this is not an adequate substitute for an experienced academic supervisor, it can help you with quick preliminary feedback.

You can also use ChatGPT to check grammar mistakes. However, ChatGPT sometimes misses errors and on rare occasions may even introduce new grammatical mistakes. We suggest using a tool like Scribbr’s free grammar checker , which is designed specifically for this purpose. Or, for more in-depth feedback, try Scribbr’s proofreading and editing service .

Example: Using ChatGPT for feedback

  • Feedback the above text with regard to consistency of tone, clarity of argument, appropriateness of evidence, and coherence of structure.
  • Clarity of Argument: The argument could be further clarified by explicitly connecting the discussed factors to their impact on academic performance. For example, explicitly stating how the proficiency of virtual platforms and equitable access to technology affect academic performance would strengthen the clarity of the argument.
  • Appropriateness of Evidence: The evidence provided is generally appropriate and relevant. To enhance the strength of the evidence, consider incorporating specific examples, research studies, or statistics that directly support the discussed factors and their impact on academic performance. This would provide stronger evidence for the claims made.
  • Coherence of Structure: The structure of the text is coherent, with a clear introduction, body paragraphs addressing different factors, and a conclusion. To improve coherence, consider adding transitional phrases or sentences at the beginning or end of paragraphs to facilitate the smooth flow of ideas between paragraphs. This would enhance the overall structure and make it easier for the reader to follow the argument.

Overall, the text demonstrates a consistent tone, a clear argument, appropriate evidence, and a coherent structure. Clarifying the argument by explicitly connecting the factors to their impact, incorporating stronger evidence, and adding transitional phrases for better coherence would further enhance the text’s effectiveness.

Note: Passing off AI-generated text as your own work is generally considered plagiarism (or at least academic dishonesty) and may result in an automatic fail and other negative consequences. AI detectors may be used to detect this offence.

If you want more tips on using AI tools , understanding plagiarism , and citing sources , make sure to check out some of our other articles with explanations, examples, and formats.

  • Citing ChatGPT
  • Best grammar checker
  • Best paraphrasing tool
  • ChatGPT in your studies
  • Is ChatGPT trustworthy?
  • Types of plagiarism
  • Self-plagiarism
  • Avoiding plagiarism
  • Academic integrity
  • Best plagiarism checker

Citing sources

  • Citation styles
  • In-text citation
  • Citation examples
  • Annotated bibliography

Yes, you can use ChatGPT to summarise text . This can help you understand complex information more easily, summarise the central argument of your own paper, or clarify your research question.

You can also use Scribbr’s free text summariser , which is designed specifically for this purpose.

Yes, you can use ChatGPT to paraphrase text to help you express your ideas more clearly, explore different ways of phrasing your arguments, and avoid repetition.

However, it’s not specifically designed for this purpose. We recommend using a specialised tool like Scribbr’s free paraphrasing tool , which will provide a smoother user experience.

Using AI writing tools (like ChatGPT ) to write your essay is usually considered plagiarism and may result in penalisation, unless it is allowed by your university. Text generated by AI tools is based on existing texts and therefore cannot provide unique insights. Furthermore, these outputs sometimes contain factual inaccuracies or grammar mistakes.

However, AI writing tools can be used effectively as a source of feedback and inspiration for your writing (e.g., to generate research questions ). Other AI tools, like grammar checkers, can help identify and eliminate grammar and punctuation mistakes to enhance your writing.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the ‘Cite this Scribbr article’ button to automatically add the citation to our free Reference Generator.

Driessen, K. (2023, June 26). How to Write an Essay with ChatGPT | Tips & Examples. Scribbr. Retrieved 23 September 2024, from https://www.scribbr.co.uk/using-ai-tools/chatgpt-essays/





  17. How to Use ChatGPT to Write a Statement of Purpose

    The GPT-3.5 architecture powers ChatGPT, an advanced language model created by OpenAI. Its capacity to produce logical writing, comprehend context, and offer original ideas makes it a priceless tool for honing your Statement of Purpose. Here are several ways to use ChatGPT at different phases of creating a SOP:

  18. ChatGPT

    Get started with ChatGPT today. View pricing plans. Free. Assistance with writing, problem solving and more. Access to GPT-4o mini. Limited access to GPT-4o. Limited access to advanced data analysis, file uploads, vision, web browsing, and image generation. Use custom GPTs. $0/ month.

  19. I used ChatGPT to help me win a scholarship. : r/ChatGPT

    Share. UncleObamasBanana. • 1 yr. ago. I totally used chatgpt to right a scholarship letter for my friend and just inputted the the personal details in the blanks provided like a madlib and he just told me he got the $1500 scholarship. 😂 I didn't tell him until after he told me he won. Super stoked how simple and easy it was.

  20. How to Write an Essay with ChatGPT

    Writing a research question. You can use ChatGPT to brainstorm potential research questions or to narrow down your thesis statement. Begin by inputting a description of the research topic or assigned question. Then include a prompt like "Write 3 possible research questions on this topic".

  21. How good is ChatGPT at college admissions essays? Case study ...

    The prompt specified approximately 700 words, and the output was 654. I find ChatGPT tends to produce shorter essays than requested. The UChicago essay has no word limit, but I figured 600-700 words would be long enough. Successful essays tend to be in that range, maybe a little longer. 800 is probably too long for an essay like this.

  22. Use ChatGPT for scholarship application & write essays

    Use ChatGPT to write scholarships essays, personal statements, CV, resume to apply for scholarships #chatgpt #chatgptexamples #openai #scholarships #scholars...

  23. Using ChatGPT for academic essays : r/ChatGPTPro

    In this video, I show you how to write an essay using chart GPT as your assistant. The video is divided into chapters, making it easy for you to navigate through it. I aim to keep it brief and productive for you. I start the chart by giving it a prompt to explain the context of our conversation.