TextLens API for Python
Text quality metrics in one requests call. No NLTK corpus downloads, no spaCy language models, no GPU requirements. Works anywhere Python runs.
import requests

response = requests.post(
    "https://api.ckmtools.dev/v1/analyze",
    json={"text": "Your text here"}
)
data = response.json()
print(data["readability"]["flesch_kincaid_grade"])  # 8.2
print(data["sentiment"]["label"])                   # "positive"
print(data["keywords"][:3])                         # ["text", "analysis", "pipeline"]
Why Python text analysis is painful
Installing spaCy means downloading 500MB+ language models before you can run anything. NLTK requires corpus downloads that break silently in CI — you discover the failure at runtime, not install time. Transformers need CUDA or tolerate 30-second cold starts on CPU. Every option adds a new system dependency to manage across environments, containers, and Lambda layers.
For common tasks — readability scoring, basic sentiment, keyword extraction — these dependencies are overkill. You just want a number, not a model pipeline with a Docker layer budget.
TextLens API wraps these operations in a single REST endpoint. Your code calls requests.post() and gets back a JSON object with all the metrics. No models installed locally, no corpus downloads, no GPU. Your CI pipeline stays lean.
Works in Jupyter notebooks, pandas pipelines, FastAPI services, AWS Lambda functions, and CI scripts equally — anywhere requests runs.
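Because every metric arrives over HTTP, transient network failures replace dependency failures as the thing to plan for. A minimal hardening sketch — the retry counts, backoff factor, and timeout below are illustrative assumptions, not documented SDK defaults:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

API_URL = "https://api.ckmtools.dev/v1/analyze"

def make_session(retries=3, backoff=0.5):
    """Build a requests.Session that retries transient failures."""
    retry = Retry(
        total=retries,
        backoff_factor=backoff,
        status_forcelist=[429, 500, 502, 503, 504],
        allowed_methods=["POST"],
    )
    session = requests.Session()
    session.mount("https://", HTTPAdapter(max_retries=retry))
    return session

def analyze(session, text, timeout=10):
    """POST text to the analyze endpoint; raise on HTTP errors."""
    r = session.post(API_URL, json={"text": text}, timeout=timeout)
    r.raise_for_status()
    return r.json()
```

Reusing one session across calls also keeps the underlying TCP connection alive, which matters when you analyze many texts in a loop.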
Drop it into your Python workflow
Real integration patterns for the most common Python environments.
import pandas as pd
import requests

def analyze_text(text):
    r = requests.post("https://api.ckmtools.dev/v1/analyze", json={"text": text})
    data = r.json()
    return {
        "grade_level": data["readability"]["flesch_kincaid_grade"],
        "sentiment": data["sentiment"]["label"],
        "keywords": ", ".join(data["keywords"][:5]),
    }

df = pd.read_csv("articles.csv")
df[["grade_level", "sentiment", "keywords"]] = df["body"].apply(
    lambda t: pd.Series(analyze_text(t))
)
from fastapi import FastAPI
import requests

app = FastAPI()

@app.post("/analyze")
async def analyze(text: str):
    r = requests.post("https://api.ckmtools.dev/v1/analyze", json={"text": text})
    return r.json()
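One caveat with the FastAPI pattern: `requests` is synchronous, so calling it directly inside an `async def` endpoint blocks the event loop for the duration of the upstream call. A stdlib-only workaround (a sketch under that assumption; the `post` parameter is a test hook, not API surface) is to push the blocking call onto a worker thread with `asyncio.to_thread`:

```python
import asyncio
import requests

API_URL = "https://api.ckmtools.dev/v1/analyze"

def _analyze_blocking(text, post=requests.post):
    """Synchronous call to the analyze endpoint."""
    r = post(API_URL, json={"text": text})
    r.raise_for_status()
    return r.json()

async def analyze_async(text, post=requests.post):
    # Run the blocking requests call in a worker thread so the
    # event loop stays free to serve other requests meanwhile.
    return await asyncio.to_thread(_analyze_blocking, text, post)
```

Inside the FastAPI endpoint you would `return await analyze_async(text)`; an async HTTP client such as httpx is the other common choice.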
import json
import requests

text = "Paste any text here and get instant readability metrics."
r = requests.post("https://api.ckmtools.dev/v1/analyze", json={"text": text})
print(json.dumps(r.json(), indent=2))
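In CI, the usual next step is to turn the score into a pass/fail gate. A hypothetical example of that pattern — the `MAX_GRADE` threshold and function names are our own, not part of the API:

```python
MAX_GRADE = 9.0  # assumed target grade level; tune for your audience

def check_readability(data, max_grade=MAX_GRADE):
    """Gate an analyze response: ok iff the text reads at or below max_grade."""
    grade = data["readability"]["flesch_kincaid_grade"]
    return grade <= max_grade, grade

def run_gate(data):
    """Return a process exit code: 0 passes the build, 1 fails it."""
    ok, grade = check_readability(data)
    print(f"Flesch-Kincaid grade {grade}: {'OK' if ok else 'too complex'}")
    return 0 if ok else 1
```

Wire `run_gate(r.json())` into `sys.exit()` and the pipeline fails whenever a doc drifts above the target grade level.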
What you get back
All metrics in a single call. No parsing libraries needed — just response.json().
{
  "readability": {
    "flesch_kincaid_grade": 8.2,
    "flesch_reading_ease": 67.3,
    "gunning_fog_index": 10.1,
    "smog_index": 9.4,
    "automated_readability_index": 8.7
  },
  "sentiment": {
    "label": "positive",
    "score": 0.73
  },
  "keywords": ["text", "analysis", "pipeline", "readability", "api"],
  "structure": {
    "sentence_count": 12,
    "avg_sentence_length": 18.4,
    "word_count": 221,
    "paragraph_count": 4
  }
}
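If you prefer attribute access and type hints over raw dicts, the response shape above maps cleanly onto stdlib dataclasses. A sketch of that wrapper — the class and field names mirror the JSON keys but are our own invention, not an official SDK:

```python
from dataclasses import dataclass, field

@dataclass
class Readability:
    flesch_kincaid_grade: float
    flesch_reading_ease: float
    gunning_fog_index: float
    smog_index: float
    automated_readability_index: float

@dataclass
class AnalysisResult:
    readability: Readability
    sentiment_label: str
    sentiment_score: float
    keywords: list
    structure: dict = field(default_factory=dict)

def parse_result(data: dict) -> AnalysisResult:
    """Convert a raw analyze response dict into typed objects."""
    return AnalysisResult(
        readability=Readability(**data["readability"]),
        sentiment_label=data["sentiment"]["label"],
        sentiment_score=data["sentiment"]["score"],
        keywords=data["keywords"],
        structure=data.get("structure", {}),
    )
```

With this in place, `parse_result(r.json()).readability.flesch_kincaid_grade` gives editors and type checkers something to hold on to.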
Join the early access list
TextLens API is in development. Early access to the Python SDK includes requests-compatible helper methods and pandas integration examples. Join the waitlist and we'll email you when access opens.
Data engineering pipelines? See pandas/DataFrame examples →