conflict-of-interest-checker

Check for co-authorship conflicts between authors and suggested reviewers

Quality: 52% (does it follow best practices?)
Impact: 99%, 1.94x (average score across 3 eval scenarios)
Security (by Snyk): Passed, no known issues


Conflict of Interest Checker

Detects co-authorship and institutional conflicts of interest between manuscript authors and candidate peer reviewers.

Use Cases

  • Journal submission prep
  • Editorial decisions
  • Peer review integrity
  • Compliance verification

Parameters

Parameter           Type    Default  Required  Description
--authors, -a       string  -        Yes       Comma-separated author names
--reviewers, -r     string  -        Yes       Comma-separated reviewer names
--publications, -p  string  -        No        CSV file with publication records

CSV Format

author,reviewer,paper_id
Smith,Brown,paper1
Smith,Jones,paper2
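Given this format, the co-authorship check can be sketched as loading the CSV into a mapping of (author, reviewer) pairs and intersecting it with the submitted name lists. This is a minimal illustration of the idea, not the skill's actual implementation; the function names are assumptions.

```python
import csv
from collections import defaultdict

def load_coauthorships(path):
    """Map each (author, reviewer) pair to the papers they share."""
    shared = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            shared[(row["author"], row["reviewer"])].append(row["paper_id"])
    return shared

def find_conflicts(authors, reviewers, shared):
    """Return one conflict record per author-reviewer pair with shared papers."""
    return [
        {"reviewer": r, "author": a, "papers": shared[(a, r)]}
        for a in authors
        for r in reviewers
        if (a, r) in shared
    ]
```

With the sample CSV above, `find_conflicts(["Smith", "Lee"], ["Brown", "Davis"], shared)` would flag only the Smith/Brown pair via paper1.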

Usage

# Check with demo data
python scripts/main.py --authors "Smith,Jones,Lee" --reviewers "Brown,Davis,Wilson"

# Check with publication records
python scripts/main.py --authors "Smith,Jones" --reviewers "Brown,Davis" --publications pubs.csv

Returns

  • Conflict flagging (coauthorship, institutional)
  • Shared publication list
  • Recommendation: Accept/Recuse
  • Alternative reviewer suggestions
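The Accept/Recuse recommendation could follow a simple rule: recuse any reviewer who shares at least one publication with an author. The function below is a hypothetical sketch of that rule, not the skill's published logic.

```python
def recommend(reviewer, conflicts):
    """Return ("Recuse", shared_papers) if the reviewer appears in any
    conflict record, otherwise ("Accept", []). Illustrative rule only."""
    flagged = [c for c in conflicts if c["reviewer"] == reviewer]
    if flagged:
        papers = sorted({p for c in flagged for p in c["papers"]})
        return ("Recuse", papers)
    return ("Accept", [])
```

A real policy might also weigh recency or the number of shared papers before recusing; this sketch treats any shared publication as disqualifying.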

Example Output

⚠ Found 2 potential conflict(s):

1. COAUTHORSHIP CONFLICT
   Reviewer: Brown
   Author: Smith
   Shared papers: paper1

2. COAUTHORSHIP CONFLICT
   Reviewer: Wilson
   Author: Smith
   Shared papers: paper2

Risk Assessment

Risk Indicator         Assessment                            Level
Code Execution         Python/R scripts executed locally     Medium
Network Access         No external API calls                 Low
File System Access     Read input files, write output files  Medium
Instruction Tampering  Standard prompt guidelines            Low
Data Exposure          Output files saved to workspace       Low

Security Checklist

  • No hardcoded credentials or API keys
  • No unauthorized file system access (../)
  • Output does not expose sensitive information
  • Prompt injection protections in place
  • Input file paths validated (no ../ traversal)
  • Output directory restricted to workspace
  • Script execution in sandboxed environment
  • Error messages sanitized (no stack traces exposed)
  • Dependencies audited
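The path-traversal check in the list above can be done by resolving the candidate path and confirming it stays under the workspace root. A sketch, assuming a workspace root supplied by the caller (the function name is hypothetical):

```python
from pathlib import Path

def validate_input_path(path, workspace):
    """Reject paths that resolve outside the workspace root,
    blocking ../ traversal. Returns the resolved absolute path."""
    root = Path(workspace).resolve()
    candidate = (root / path).resolve()
    if root not in candidate.parents and candidate != root:
        raise ValueError(f"path escapes workspace: {path}")
    return candidate
```

Resolving before comparing is the key step: a naive substring check on the raw string would miss `../` segments and symlinks.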

Prerequisites

No additional Python packages required.

Evaluation Criteria

Success Metrics

  • Successfully executes main functionality
  • Output meets quality standards
  • Handles edge cases gracefully
  • Performance is acceptable

Test Cases

  1. Basic Functionality: Standard input → Expected output
  2. Edge Case: Invalid input → Graceful error handling
  3. Performance: Large dataset → Acceptable processing time

Lifecycle Status

  • Current Stage: Draft
  • Next Review Date: 2026-03-06
  • Known Issues: None
  • Planned Improvements:
    • Performance optimization
    • Additional feature support
Repository: aipoch/medical-research-skills
