This chapter corresponds to code in the researcherRAG repository
View CLI Source

Context Scanning

โญ Beginnerโฑ๏ธ 5-10 minutes๐Ÿ“‹ Vector DB required

Get a high-level overview of your literature corpus. Ideal for initial exploration when you need to understand the structure, themes, and methodological approaches in your database.

📋 When to Use This Prompt

  • ✓ Starting your literature analysis (Stage 6, Session 1)
  • ✓ Need to understand corpus coverage and scope
  • ✓ Want to identify main themes before deep diving
  • ✓ Preparing for targeted follow-up queries

Prompt Template

Copy this prompt and paste it into your RAG interface:

Analyze the papers in my database and provide a structured overview of:
1. Core themes and topics
2. Methodological approaches
3. Key findings and outcomes

Organize the response hierarchically with citations and page ranges for each category.
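If you drive your RAG stack from a script rather than a chat interface, the same prompt can be stored as a constant and passed to whatever query function your pipeline exposes. A minimal sketch, where `query_rag` is a hypothetical placeholder (not a researcherRAG function) that you would wire to your own stack:

```python
# The context-scanning prompt, stored for programmatic reuse.
CONTEXT_SCAN_PROMPT = """Analyze the papers in my database and provide a structured overview of:
1. Core themes and topics
2. Methodological approaches
3. Key findings and outcomes

Organize the response hierarchically with citations and page ranges for each category."""


def query_rag(prompt: str, top_k: int = 20) -> str:
    """Placeholder: replace the body with a call into your own RAG
    interface (e.g. a LangChain chain or an HTTP request to your
    retrieval service)."""
    raise NotImplementedError("wire this to your RAG stack")


# overview = query_rag(CONTEXT_SCAN_PROMPT)
```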

Expected Output Structure

Your RAG system should return something like this:

## Literature Overview (137 papers, 2010-2024)

### 1. Core Themes and Topics

**Technology Adoption (45 papers, 32.8%)**
- Adoption barriers and facilitators [Smith, 2023; Jones et al., 2022]
- Implementation strategies [Lee & Kim, 2021]
- Organizational readiness [Martinez, 2020]

**User Acceptance and Attitudes (38 papers, 27.7%)**
- TAM/UTAUT applications [Davis et al., 2023]
- User resistance and change management [Park, 2022]

**Implementation Outcomes (32 papers, 23.4%)**
- Success metrics and KPIs [Wilson, 2024]
- Long-term sustainability [Chen & Liu, 2021]

### 2. Methodological Approaches

**Quantitative (67 papers, 48.9%)**
- Surveys with structural equation modeling (35 papers)
- Experimental studies (RCT) (18 papers)
- Longitudinal tracking (14 papers)

**Qualitative (45 papers, 32.8%)**
- Case studies (25 papers)
- Interviews and focus groups (20 papers)

**Mixed Methods (25 papers, 18.2%)**
- Convergent design (15 papers)
- Explanatory sequential (10 papers)

### 3. Key Findings and Outcomes

**Positive Outcomes**
- 73% of studies report improved efficiency [Smith, 2023, p.45]
- Cost reduction in 62% of cases [Johnson, 2022, p.112]

**Negative Outcomes or Challenges**
- Initial resistance in 81% of implementations [Lee, 2021, p.67]
- Training costs underestimated in 45% [Martinez, 2020, p.89]

**Conflicting Results**
- User satisfaction: Mixed results (positive: 55%, negative: 25%, neutral: 20%)
- Long-term adoption: Varies by sector [Chen, 2021]

Customization Options

Modify the prompt to focus on specific aspects:

Focus on Specific Time Period

Analyze the papers published AFTER 2020 in my database and provide...
(rest of prompt remains the same)

Compare Two Time Periods

Compare papers published BEFORE 2020 vs AFTER 2020:
1. How have core themes evolved?
2. Have methodological approaches shifted?
3. What new findings emerged in recent years?

Organize side-by-side with citations.

Focus on Methodology Only

Analyze the methodological approaches in my database:
1. Quantitative vs Qualitative vs Mixed Methods (distribution)
2. Specific research designs used (RCT, case study, survey, etc.)
3. Sample sizes and populations
4. Data analysis techniques

For each category, provide counts and example citations.

Geographic or Contextual Focus

Analyze the papers by geographic region or context:
1. Core themes by region (Asia, Europe, North America, etc.)
2. Methodological preferences by region
3. Key findings specific to each region

Highlight regional differences and similarities.
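Because each variant differs from the base prompt only in one clause, a small helper can generate them consistently. The sketch below is illustrative (the function name and structure are assumptions, not part of researcherRAG):

```python
# Illustrative helper for generating prompt variants from the base template.
BASE = (
    "Analyze the papers in my database and provide a structured overview of:\n"
    "1. Core themes and topics\n"
    "2. Methodological approaches\n"
    "3. Key findings and outcomes\n\n"
    "Organize the response hierarchically with citations and page ranges "
    "for each category."
)


def published_after(year: int) -> str:
    """Variant of the base prompt restricted to papers published after `year`."""
    return BASE.replace(
        "the papers in my database",
        f"the papers published AFTER {year} in my database",
    )


# First line becomes:
# "Analyze the papers published AFTER 2020 in my database and provide a structured overview of:"
print(published_after(2020).splitlines()[0])
```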

Common Follow-up Questions

After getting your overview, dig deeper with these follow-up prompts:

  • Q: "Which papers are most cited in my database?"
  • Q: "Show me 5 representative papers for each theme"
  • Q: "What are the most common limitations mentioned?"
  • Q: "Which authors have published multiple papers in my corpus?"
  • Q: "What gaps or future research directions are suggested?"

Pro Tips

💡 Start Broad

Use this prompt as your first query in any research session. It gives you a mental map of your corpus before diving into specific questions.

📊 Export & Save

Save the overview output to a markdown file. Use it as a reference document throughout your analysis to avoid repeating this query.

๐Ÿ” Verify Counts

The AI may not have perfect counts. Spot-check by asking "How many papers mention [specific term]?" to validate the overview.
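You can also spot-check counts outside the LLM entirely, assuming you can export abstracts (or chunks) from your vector DB as plain records. The records below are made-up placeholders standing in for a real export:

```python
# Count how many exported records mention a term (case-insensitive).
def count_mentions(records, term):
    term = term.lower()
    return sum(1 for r in records if term in r["abstract"].lower())


# Placeholder records standing in for a real export from your vector DB.
records = [
    {"id": "smith2023", "abstract": "Barriers to technology adoption in hospitals."},
    {"id": "park2022", "abstract": "User resistance and change management."},
    {"id": "wilson2024", "abstract": "Success metrics for technology adoption programs."},
]

print(count_mentions(records, "adoption"))  # 2
```

A string count like this is cruder than semantic retrieval, but that is the point: it gives you an independent baseline to compare against the AI's reported numbers.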

🎯 Use for Reporting

This overview format is perfect for the "Corpus Description" section of your systematic review manuscript.