
scholar-evaluation

Systematically evaluate scholarly work using the ScholarEval framework. The skill provides structured assessment across research quality dimensions (problem formulation, methodology, analysis, and writing) with quantitative scoring and actionable feedback.

Install with Tessl CLI

npx tessl i github:K-Dense-AI/claude-scientific-skills --skill scholar-evaluation

Score: 77
Evaluation (does it follow best practices?): 92%
Agent success when using this skill: 1.67x
Validation for skill structure


Evaluation results

Research Paper Pre-Screening for Journal Submission (with context: 100% · without context: 47%)

Programmatic score calculation with 8-dimension framework

| Criteria | Without context | With context |
| --- | --- | --- |
| All 8 dimensions present | 41% | 100% |
| 5-point scale used | 100% | 100% |
| Correct score JSON keys | 0% | 100% |
| calculate_scores.py invoked | 66% | 100% |
| Output report produced | 50% | 100% |
| Quality threshold interpretation | 0% | 100% |
| Dimension weights reflected | 0% | 100% |
| Scores within valid range | 100% | 100% |
| Evaluation covers paper content | 100% | 100% |
| Workflow steps documented | 90% | 100% |

Without context: $0.7062 · 3m 11s · 30 turns · 146 in / 11,903 out tokens

With context: $0.8114 · 2m 27s · 26 turns · 11,772 in / 7,993 out tokens
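The criteria above check for a weighted 8-dimension rubric scored on a 5-point scale, with scores combined programmatically and interpreted against a quality threshold. A minimal sketch of how such a calculation might look; the dimension names, weights, and the 3.5 threshold here are illustrative assumptions, not the actual contents of the skill's `calculate_scores.py`:

```python
# Illustrative weighted scoring on a 5-point scale.
# Dimension names, weights, and the 3.5 threshold are assumptions
# for demonstration -- not the skill's real configuration.
WEIGHTS = {
    "problem_formulation": 0.15,
    "methodology": 0.20,
    "analysis": 0.15,
    "writing": 0.10,
    "literature_review": 0.10,
    "novelty": 0.10,
    "reproducibility": 0.10,
    "impact": 0.10,
}  # weights sum to 1.0


def weighted_score(scores: dict[str, int]) -> float:
    """Combine per-dimension 1-5 scores into one weighted score."""
    for dim, s in scores.items():
        if not 1 <= s <= 5:
            raise ValueError(f"{dim}: score {s} outside valid 1-5 range")
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)


def interpret(score: float) -> str:
    """Map a weighted score to a coarse quality verdict."""
    return "meets quality threshold" if score >= 3.5 else "needs revision"
```

This mirrors what the eval criteria probe for: all eight dimensions present, scores kept within the valid 1-5 range, weights actually reflected in the total, and a threshold-based interpretation of the result.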

Manuscript Feedback for Revision Cycle (with context: 83% · without context: 38%)

Qualitative per-dimension assessment and overall synthesis

| Criteria | Without context | With context |
| --- | --- | --- |
| Per-dimension strengths count | 0% | 58% |
| Per-dimension weaknesses count | 0% | 83% |
| Overall major strengths 3-5 | 0% | 100% |
| Overall critical weaknesses 3-5 | 40% | 20% |
| Prioritized recommendations present | 100% | 100% |
| Specific section references | 100% | 100% |
| Actionable suggestions | 100% | 100% |
| Balanced tone | 25% | 100% |
| Evidence-based grounding | 60% | 80% |
| Overall quality assessment present | 37% | 100% |

Without context: $0.2278 · 1m 37s · 11 turns · 16 in / 4,622 out tokens

With context: $0.8728 · 3m 21s · 24 turns · 11,562 in / 10,283 out tokens
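The criteria in this scenario check the shape of the qualitative report: 3-5 overall strengths and weaknesses, per-dimension notes on both sides, and prioritized recommendations. A hypothetical validation sketch of those structural checks; the field names are assumptions for illustration, not the skill's actual schema:

```python
def validate_report(report: dict) -> list[str]:
    """Return a list of structural problems with a qualitative report.

    Field names ("major_strengths", "dimensions", etc.) are
    illustrative assumptions, not the skill's real output schema.
    """
    problems = []
    # Overall synthesis should list 3-5 items on each side.
    for key in ("major_strengths", "critical_weaknesses"):
        n = len(report.get(key, []))
        if not 3 <= n <= 5:
            problems.append(f"{key}: expected 3-5 items, got {n}")
    # Recommendations must be present and non-empty.
    if not report.get("recommendations"):
        problems.append("recommendations: missing or empty")
    # Every dimension needs both strengths and weaknesses noted.
    for dim, notes in report.get("dimensions", {}).items():
        if not notes.get("strengths") or not notes.get("weaknesses"):
            problems.append(f"{dim}: needs both strengths and weaknesses")
    return problems
```

An empty returned list would correspond to a report that passes the per-dimension and overall-synthesis criteria listed above.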

Thesis Chapter Evaluation for Doctoral Committee Review (with context: 94% · without context: 26%)

Contextual evaluation adjustments and structured feedback format

| Criteria | Without context | With context |
| --- | --- | --- |
| Work type identified | 87% | 100% |
| Student/educational focus | 90% | 100% |
| Stage-appropriate standards | 90% | 100% |
| Output named SCHOLAR_EVALUATION.md | 0% | 100% |
| Discipline-specific norms applied | 80% | 100% |
| Venue-adjusted standards | 50% | 100% |
| 8 dimensions covered | 40% | 100% |
| Publication readiness section | 60% | 40% |
| Priority recommendations ranked | 100% | 100% |
| Does not use peer review tone | 100% | 100% |

Without context: $0.2325 · 1m 42s · 10 turns · 14 in / 4,543 out tokens

With context: $0.8907 · 3m 44s · 24 turns · 6,973 in / 11,291 out tokens

Evaluated agent: Claude Code
Model: Unknown

