Python compatibility wrapper for computing string edit distances and similarities using fast Levenshtein algorithms.
Build a tool that analyzes the similarity between different versions of document drafts by comparing them as sequences of words. The tool should preserve the order of words when computing similarity, making it suitable for tracking how much documents have changed across revisions.
@generates
def compute_similarity(doc1: list[str], doc2: list[str]) -> float:
"""
Computes similarity between two documents represented as word sequences.
Args:
doc1: First document as a list of words
doc2: Second document as a list of words
Returns:
float: Similarity score between 0.0 (completely different) and 1.0 (identical)
"""
pass
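One way to satisfy this stub is a word-level Levenshtein distance normalized by the longer document's length, so the score lands in [0.0, 1.0] and word order is preserved. The sketch below is a pure-Python illustration, not the package's actual implementation (which would delegate to the fast C-backed Levenshtein routines):

```python
def compute_similarity(doc1: list[str], doc2: list[str]) -> float:
    """Word-level edit-distance similarity: 1 - distance / max_len."""
    if not doc1 and not doc2:
        return 1.0  # two empty documents are identical
    # Classic dynamic-programming Levenshtein, over words instead of characters.
    prev = list(range(len(doc2) + 1))
    for i, w1 in enumerate(doc1, start=1):
        curr = [i] + [0] * len(doc2)
        for j, w2 in enumerate(doc2, start=1):
            cost = 0 if w1 == w2 else 1
            curr[j] = min(prev[j] + 1,         # delete a word from doc1
                          curr[j - 1] + 1,     # insert a word from doc2
                          prev[j - 1] + cost)  # substitute (free if equal)
        prev = curr
    distance = prev[-1]
    return 1.0 - distance / max(len(doc1), len(doc2))
```

For example, `compute_similarity(["the", "cat"], ["the", "dog"])` yields 0.5: one substitution out of two words.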
def find_most_similar(target: list[str], candidates: list[list[str]]) -> int:
"""
Finds the index of the most similar document from a list of candidates.
Args:
target: Target document as a list of words
candidates: List of candidate documents, each as a list of words
Returns:
int: Index of the most similar candidate document
"""
passProvides string similarity algorithms including sequence comparison.
Install with Tessl CLI
npx tessl i tessl/pypi-python-levenshtein