tessl/npm-micro

Asynchronous HTTP microservices framework with built-in body parsing and error handling

Command Line Interface

The micro command is the CLI binary for running microservice files. It supports multiple endpoint types and configuration options, making it suitable for both containerized and traditional deployments.

Capabilities

Micro CLI Command

Main binary for running microservice files with flexible endpoint configuration and automatic file discovery.

micro [options] [entry_point.js]

# Default behavior - listens on port 3000, uses package.json main or index.js
micro

# Specify entry point file
micro ./my-service.js

# Custom port
micro -l tcp://0.0.0.0:8080

# Multiple endpoints
micro -l tcp://0.0.0.0:3000 -l unix:/tmp/micro.sock

# Environment variable port with fallback
micro -l tcp://0.0.0.0:${PORT-3000}

Command Options

Options:
  --help                   Show help message and usage information
  -v, --version            Display the current version of micro
  -l, --listen <uri>       Specify a listen URI (can be used multiple times)

Endpoint Types

Micro supports three types of endpoints for different deployment scenarios:

# TCP endpoints (traditional host:port)
micro -l tcp://hostname:port
micro -l tcp://0.0.0.0:3000
micro -l tcp://localhost:8080

# UNIX domain sockets (for inter-process communication)
micro -l unix:/path/to/socket.sock
micro -l unix:/tmp/microservice.sock

# Windows named pipes (Windows environments)
micro -l pipe:\\.\pipe\PipeName
micro -l pipe:\\.\pipe\MicroService
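
Conceptually, the three schemes are distinguished by their URI prefix. The sketch below shows one way such a --listen value could be classified; it is an illustration, not micro's actual parser:

```javascript
// Sketch: classify a --listen URI by scheme (not micro's actual implementation).
function classifyEndpoint(uri) {
  if (uri.startsWith('tcp://')) {
    // Traditional host:port endpoint.
    const [host, port] = uri.slice('tcp://'.length).split(':');
    return { type: 'tcp', host, port: Number(port) };
  }
  if (uri.startsWith('unix:')) {
    // UNIX domain socket path.
    return { type: 'unix', path: uri.slice('unix:'.length) };
  }
  if (uri.startsWith('pipe:')) {
    // Windows named pipe.
    return { type: 'pipe', path: uri.slice('pipe:'.length) };
  }
  // Mirrors the CLI's "Unknown --listen endpoint scheme" error.
  throw new Error(`Unknown --listen endpoint scheme (protocol): ${uri.split(':')[0]}:`);
}
```

For example, classifyEndpoint('tcp://0.0.0.0:3000') yields a TCP descriptor with host '0.0.0.0' and port 3000.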

Usage Examples:

# Basic usage - default port 3000
micro

# Production server with custom port
micro -l tcp://0.0.0.0:8080 ./app.js

# Development with environment variable
export PORT=3001
micro -l tcp://0.0.0.0:$PORT

# Multiple listening endpoints
micro -l tcp://0.0.0.0:3000 -l unix:/tmp/api.sock ./service.js

# Container deployment
micro -l tcp://0.0.0.0:${PORT-3000} ./dist/server.js

# Local development with specific file
micro ./my-microservice.js

# Windows named pipe
micro -l pipe:\\.\pipe\MyAPIService ./api.js

Entry Point Discovery

When no entry point file is specified, micro follows this discovery order:

  1. package.json main field: Uses the main property from package.json
  2. index.js fallback: Uses index.js in current directory if no main field
  3. Error handling: Exits with an error if no valid entry point is found

# These commands follow the discovery process:
micro                    # Uses package.json main or index.js
micro -l tcp://0.0.0.0:8080  # Same discovery with custom port

Example package.json configurations:

{
  "name": "my-service",
  "main": "./dist/server.js",
  "scripts": {
    "start": "micro",
    "dev": "micro ./src/dev-server.js"
  }
}

Environment Integration

# Using environment variables for configuration
export PORT=8080
export HOST=0.0.0.0
micro -l tcp://$HOST:$PORT

# Docker container usage
FROM node:16
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 3000
CMD ["micro", "-l", "tcp://0.0.0.0:3000"]

# Kubernetes deployment
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
      - name: microservice
        image: my-service:latest
        command: ["micro", "-l", "tcp://0.0.0.0:8080"]
        ports:
        - containerPort: 8080

Process Management

The CLI includes built-in process management features:

  • Graceful shutdown: Handles SIGINT and SIGTERM signals with cleanup
  • Error handling: Logs startup errors with helpful error codes to https://err.sh/micro/
  • Port binding: Automatically handles port binding errors and validation
  • File validation: Validates entry point files exist and are readable
  • Module loading: Uses dynamic imports with ES6 module support and fallback handling
  • Export validation: Ensures the loaded module exports a function as the request handler

# The CLI handles these scenarios automatically:
micro ./non-existent.js     # Error: The file or directory "non-existent.js" doesn't exist!
micro ./invalid-module.js   # Error: The file "invalid-module.js" does not export a function.
micro -l tcp://0.0.0.0:80   # Error: permission denied (if not root)
micro -l invalid://test      # Error: Unknown --listen endpoint scheme (protocol): invalid:

Error Handling Behavior:

  • All CLI errors are logged to stderr with descriptive messages
  • Error URLs point to https://err.sh/micro/{error-code} for additional help
  • Module import errors include full stack traces for debugging
  • Process exits with code 1 for startup failures, code 2 for usage errors

Integration Examples

Docker Compose

version: '3.8'
services:
  api:
    build: .
    command: micro -l tcp://0.0.0.0:3000
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production

  worker:
    build: .
    command: micro -l unix:/tmp/worker.sock ./worker.js
    volumes:
      - /tmp:/tmp

Systemd Service

[Unit]
Description=Micro API Service
After=network.target

[Service]
Type=simple
User=microservice
WorkingDirectory=/opt/microservice
ExecStart=/usr/bin/node /usr/local/bin/micro -l tcp://0.0.0.0:3000
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

Process Manager (PM2)

{
  "apps": [{
    "name": "micro-api",
    "script": "/usr/local/bin/micro",
    "args": ["-l", "tcp://0.0.0.0:3000", "./api.js"],
    "instances": 4,
    "exec_mode": "cluster",
    "env": {
      "NODE_ENV": "production"
    }
  }]
}

Nginx Reverse Proxy

upstream microservice {
    server unix:/tmp/micro.sock;
}

server {
    listen 80;
    server_name api.example.com;
    
    location / {
        proxy_pass http://microservice;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
