Is SciSpace’s Deep Review Feature Useful?
Can we use magic to fight magic, and break the curse of information overload?
Pain Points
When it comes time to write a thesis, students dread one thing in particular: the sheer number of papers they have to face. All too often, after spending half a day reading a paper, they realize it is completely irrelevant to their research direction. That feeling is frustrating and demoralizing.
It’s not just students; teachers face the same problem. When writing papers or preparing projects, they have to work through a large body of relevant literature. The literature of every discipline gushes out like a spring, and there is no stopping it. Often you don’t even know what new literature is appearing in the world at this moment, let alone where to find it.
This is a classic case of information overload, so academia has long hoped for a tool that can automate literature reviews. Recently, products such as Gemini Deep Research started the trend, and many tools have rushed to launch their own deep research features. My friend Mr. Zhaosaipo even jokingly posted a picture captioned “rampant,” which captures the current situation vividly.
I have tried several similar tools myself and found quite a few problems. To do this well, literature coverage alone is not enough; the underlying model’s processing power and its information retrieval capability matter just as much. All three must be at a professional level.
Recently, SciSpace launched a Deep Review feature, so I decided to try it out and see whether it is actually useful. This article records my hands-on experience; I hope it serves as a useful reference.
Testing
Let’s start with my test question. I threw this at SciSpace:
How can AI help the students in higher education?
I know this question is rather broad, but it works as a starting point for the AI to help me sort out my thoughts.
After I entered it, SciSpace immediately listed several related but more specific questions for me to choose from.
Scanning the list, I found the third one particularly appealing:
Can AI-based personalized feedback and assessment tools enhance student engagement and motivation in higher education?
Why this one? As a teacher, I care a lot about whether students can receive personalized guidance through AI. If it can also spark their enthusiasm for learning, even better. I clicked “Refine for me,” and SciSpace took me to the next page.
SciSpace then asked a few follow-up questions to help me clarify my needs further.
I answered:
We are primarily interested in whether students are motivated to proactively engage in pre-class preparation, post-class review, homework, and asking good questions, among other learning activities. The specific format is flexible, provided it is AI-driven.
Based on my answer, SciSpace refined the question to:
Investigate if AI-based personalized feedback and assessment tools enhance student motivation to engage in pre-class preparation, post-class review, homework, and asking questions in higher education.
It’s not over yet. SciSpace then asked:
To refine the query further, please consider the following questions:
- Are there specific metrics or indicators you are interested in measuring to assess student motivation and engagement in pre-class preparation and other learning activities?
- Is there a particular educational discipline or field in higher education where you would like to focus the use of AI-driven tools for enhancing student engagement?
In other words, if you have more specific needs, you can follow these prompts to dig deeper. Since the question was already close to what I had in mind, I simply clicked submit.
Processing
After I submitted the question, SciSpace displayed its processing steps clearly.
What SciSpace does is actually quite similar to how we conduct literature reviews by hand. First, it searches for papers relevant to the question, then filters out the irrelevant ones.
It searched 1,750 papers in total and initially identified 325 as relevant. To avoid missing important work, it found an additional 72 papers by following citation links.
Then, it ranked these papers by relevance.
After ranking, SciSpace selected the 20 most relevant papers and generated the literature review.
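To make this workflow concrete, here is a minimal sketch of such a search, filter, expand, and rank pipeline. SciSpace has not published its implementation, so every name and threshold below is my own hypothetical illustration, with naive keyword overlap standing in for whatever relevance model it actually uses.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a deep-review selection pipeline; not SciSpace's code.

@dataclass
class Paper:
    title: str
    abstract: str
    citations: list[str] = field(default_factory=list)  # titles this paper cites

def relevance_score(paper: Paper, query: str) -> float:
    # Naive keyword overlap; a real system would use embeddings or LLM scoring.
    query_terms = set(query.lower().split())
    text_terms = set((paper.title + " " + paper.abstract).lower().split())
    return len(query_terms & text_terms) / len(query_terms)

def select_papers(corpus: list[Paper], query: str, top_k: int = 20) -> list[Paper]:
    # 1. Broad search and filter: keep papers above a relevance threshold.
    relevant = [p for p in corpus if relevance_score(p, query) > 0.2]
    # 2. Citation expansion: recover important work the first pass missed.
    cited = {title for p in relevant for title in p.citations}
    extra = [p for p in corpus if p.title in cited and p not in relevant]
    # 3. Rank the combined pool; the top papers feed the review-writing model.
    pool = relevant + extra
    pool.sort(key=lambda p: relevance_score(p, query), reverse=True)
    return pool[:top_k]
```

Mapped onto my test run: the search stage saw 1,750 candidates, the filter kept 325, citation expansion added 72, and the top 20 went on to synthesis.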
Report
The report is presented in this format.
The references use the standard APA author-year format, and each one has a clickable link that leads directly to the original text. Hovering over a reference also shows a preview of its basic information.
Judging from the results, the selected papers are very recent, mostly from 2023 onward, after ChatGPT made generative AI take off. You can also adjust the output format below the results, for example switching citations from APA to numeric style, or turning paragraphs into lists.
You can try these adjustments yourself.
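To give a feel for what the APA-to-numeric switch does, here is a toy sketch of the transformation (my own illustration; SciSpace’s converter is not public, and the example sentences are made up):

```python
import re

# Switch APA author-year citations, e.g. (Smith, 2023), to numeric ones, e.g. [1].
text = ("AI tutors can raise engagement (Smith, 2023), and personalized "
        "feedback improves homework completion (Lee & Park, 2024).")

pattern = re.compile(r"\(([A-Z][^()]*?, \d{4})\)")  # matches (Author(s), Year)
numbering: dict[str, int] = {}

def to_numeric(match: re.Match) -> str:
    key = match.group(1)
    numbering.setdefault(key, len(numbering) + 1)  # first occurrence gets next number
    return f"[{numbering[key]}]"

print(pattern.sub(to_numeric, text))
# -> AI tutors can raise engagement [1], and personalized feedback
#    improves homework completion [2].
```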
Language
You might be wondering: the dialogue so far was all in English, so why is the report in Chinese? That’s because there is a language option in the upper right corner of the interface, and Chinese was selected by default.
If I click on English, the result immediately switches to English.
Because only the output language changes, the earlier search and synthesis steps don’t need to be redone, so the switch is very fast.
Look, here’s the Japanese version. Although I can’t read Japanese, I’m still delighted by how easy it is to produce.
Limitations
What do you think after seeing this demo?
I do have one slight disappointment. SciSpace found over 300 relevant papers but used only 20 to generate the review, which is still a bit too few. That limit is probably a trade-off between model capability and cost.
In terms of accuracy, those 20 papers were well chosen, but asking them to comprehensively cover an entire field is a stretch. Still, compared with the past, when we could only extract content from 5 to 8 papers, this approach of first casting a wide net and then filtering, ranking, and focusing is already a big improvement.
Besides, as long as we break a big question into several smaller ones and run the review once per sub-question, the merged results can still be very comprehensive. Right? A sketch of that divide-and-merge idea follows.
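Here is a minimal sketch of that workflow. The `run_deep_review` function below is a hypothetical stand-in for one SciSpace Deep Review run (there is no such public API), and the sub-questions are just examples:

```python
# Divide-and-merge: run one deep review per sub-question, then
# concatenate the resulting mini-reviews into a broader report.

def run_deep_review(question: str) -> str:
    # Placeholder: in practice, run the question through SciSpace's
    # Deep Review interface and save the generated report text.
    return f"[review for: {question}]"

sub_questions = [
    "Does AI feedback motivate pre-class preparation in higher education?",
    "Does AI feedback motivate post-class review and homework completion?",
    "Does AI feedback encourage students to ask better questions?",
]

sections = [run_deep_review(q) for q in sub_questions]
combined_review = "\n\n".join(sections)  # merge into one broader review
print(combined_review)
```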
Try It
If you want to try it, you can use this link to register and experience SciSpace’s Deep Review feature. If you plan to subscribe to SciSpace’s advanced plan, you can use my discount code SHUDR40 for 40% off an annual subscription, or SHUDR20 for 20% off a monthly subscription.
Browser Control
SciSpace has been quite active recently. In addition to Deep Review, it has also launched a browser control feature, somewhat similar to OpenAI’s Operator. There is an official introduction video; you can click this link to watch it.
At first glance, you might find it strange: a literature review synthesis tool and a browser control feature seem completely unrelated. Why would SciSpace follow OpenAI’s lead and build browser control?
Actually, one keyword is enough to explain the reasoning: “access rights.”
Currently, even with SciSpace you can retrieve many documents, but they are still only part of the literature; coverage is not complete. This is not SciSpace’s fault. The paywall problem in academia has a long history, and we won’t debate its rights and wrongs here. However, your university or research institution has likely already paid to subscribe to additional literature databases. By controlling your browser to do the searching, SciSpace can use your IP address’s access rights to fetch the full text of papers your institution subscribes to, turning those institutional subscriptions into an advantage for your data sources.
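To see why “runs in your browser” matters, here is a minimal sketch using Playwright as a stand-in (my choice purely for illustration; SciSpace has not disclosed how its browser control works). The key point is that the page request leaves from your machine, so the publisher sees your institution’s IP address and serves the licensed full text.

```python
# Minimal sketch with Playwright (pip install playwright; then run
# `playwright install chromium`). This is my own illustration, not
# SciSpace's actual mechanism: the request originates from YOUR machine,
# so IP-based institutional access applies automatically.

from playwright.sync_api import sync_playwright

def fetch_fulltext(doi_url: str) -> str:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(doi_url)     # publisher sees the campus IP address
        html = page.content()  # full text if the institution subscribes
        browser.close()
        return html

# Example: the DOI Handbook's own DOI (openly available) as a safe test;
# swap in a paywalled paper's DOI to check your institutional access.
print(fetch_fulltext("https://doi.org/10.1000/182")[:200])
```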
You tell me, isn’t this feature useful?
If you are interested, you can click this link to join the Waitlist.
Summary
In this article, I walked you through SciSpace’s Deep Review feature. Based on my tests, it can indeed help us review the literature and keep up with the academic frontier more efficiently. I hope future versions will integrate more papers when generating reports and offer richer ways to present the results.
I wish you all the best in using AI to assist with literature reviews.
If you find this article useful, please hit the Applaud button.
If you think this article might be helpful to your friends, please share it with them.
Feel free to follow my column to receive timely updates.
Welcome to subscribe to my Patreon column to access exclusive articles for paid users.
To watch video content, please subscribe to my YouTube channel.
My Twitter: @wshuyi