Asynchronous HTTP microservices framework with built-in body parsing and error handling
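The programming model is a single exported async function per service: micro handles listening, body parsing, and serializing the return value (plain objects become JSON, with a 200 status by default). A minimal sketch — the file name and payload shape are illustrative:

```javascript
// service.js — the whole service is one exported async function.
// micro invokes it per request and serializes the return value as the
// response body (plain objects are sent as JSON).
const service = async (req, res) => {
  // Illustrative payload; the shape is entirely up to you.
  return { service: 'my-service', method: req.method };
};

module.exports = service;
```

Running `micro ./service.js` is all that is needed to serve it.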
CLI binary for running microservice files, with support for multiple endpoint types, flexible listen configuration, and automatic entry-point discovery. The micro command suits both containerized and traditional deployments.
micro [options] [entry_point.js]
# Default behavior - listens on port 3000, uses package.json main or index.js
micro
# Specify entry point file
micro ./my-service.js
# Custom port
micro -l tcp://0.0.0.0:8080
# Multiple endpoints
micro -l tcp://0.0.0.0:3000 -l unix:/tmp/micro.sock
# Environment variable port with fallback
micro -l tcp://0.0.0.0:${PORT-3000}

Options:
--help Show help message and usage information
-v, --version Display current version of micro
-l, --listen <uri>   Specify listen URI (can be used multiple times)

Micro supports three types of endpoints for different deployment scenarios:
# TCP endpoints (traditional host:port)
micro -l tcp://hostname:port
micro -l tcp://0.0.0.0:3000
micro -l tcp://localhost:8080
# UNIX domain sockets (for inter-process communication)
micro -l unix:/path/to/socket.sock
micro -l unix:/tmp/microservice.sock
# Windows named pipes (Windows environments)
micro -l pipe:\\.\pipe\PipeName
micro -l pipe:\\.\pipe\MicroService

Usage Examples:
# Basic usage - default port 3000
micro
# Production server with custom port
micro -l tcp://0.0.0.0:8080 ./app.js
# Development with environment variable
export PORT=3001
micro -l tcp://0.0.0.0:$PORT
# Multiple listening endpoints
micro -l tcp://0.0.0.0:3000 -l unix:/tmp/api.sock ./service.js
# Container deployment
micro -l tcp://0.0.0.0:${PORT-3000} ./dist/server.js
# Local development with specific file
micro ./my-microservice.js
# Windows named pipe
micro -l pipe:\\.\pipe\MyAPIService ./api.js

When no entry point file is specified, micro follows this discovery order:
1. main property from package.json
2. index.js in the current directory

# These commands follow the discovery process:
micro # Uses package.json main or index.js
micro -l tcp://0.0.0.0:8080   # Same discovery with custom port

Example package.json configurations:
{
  "name": "my-service",
  "main": "./dist/server.js",
  "scripts": {
    "start": "micro",
    "dev": "micro ./src/dev-server.js"
  }
}

# Using environment variables for configuration
export PORT=8080
export HOST=0.0.0.0
micro -l tcp://$HOST:$PORT
# Docker container usage
FROM node:16
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 3000
CMD ["npx", "micro", "-l", "tcp://0.0.0.0:3000"]
# Kubernetes deployment
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: microservice
          image: my-service:latest
          command: ["micro", "-l", "tcp://0.0.0.0:8080"]
          ports:
            - containerPort: 8080

The CLI includes built-in process management features:
# The CLI handles these scenarios automatically:
micro ./non-existent.js # Error: The file or directory "non-existent.js" doesn't exist!
micro ./invalid-module.js # Error: The file "invalid-module.js" does not export a function.
micro -l tcp://0.0.0.0:80 # Error: permission denied (if not root)
micro -l invalid://test     # Error: Unknown --listen endpoint scheme (protocol): invalid:

Error Handling Behavior:
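Errors thrown (or promise rejections) inside a handler are caught by micro and turned into HTTP error responses, using the error's `statusCode` property when present and 500 otherwise. A stdlib-only sketch of that contract — `handleErrors` is an illustrative stand-in, not micro's actual source:

```javascript
// Illustrative wrapper mimicking how micro maps handler errors to
// HTTP responses: err.statusCode wins, anything else becomes a 500.
async function handleErrors(handler, req, res) {
  try {
    const result = await handler(req, res);
    res.statusCode = 200;
    return JSON.stringify(result);
  } catch (err) {
    res.statusCode = err.statusCode || 500;
    return err.message;
  }
}

// A handler that fails with an explicit status code.
const teapot = async () => {
  const err = new Error('short and stout');
  err.statusCode = 418;
  throw err;
};

const res = { statusCode: 0 };
handleErrors(teapot, {}, res).then((body) => {
  console.log(res.statusCode, body); // 418 short and stout
});
```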
Docker Compose Example:
version: '3.8'
services:
  api:
    build: .
    command: micro -l tcp://0.0.0.0:3000
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
  worker:
    build: .
    command: micro -l unix:/tmp/worker.sock ./worker.js
    volumes:
      - /tmp:/tmp

Systemd Service Unit:
[Unit]
Description=Micro API Service
After=network.target
[Service]
Type=simple
User=microservice
WorkingDirectory=/opt/microservice
ExecStart=/usr/bin/node /usr/local/bin/micro -l tcp://0.0.0.0:3000
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target

PM2 Ecosystem Configuration:
{
  "apps": [{
    "name": "micro-api",
    "script": "/usr/local/bin/micro",
    "args": ["-l", "tcp://0.0.0.0:3000", "./api.js"],
    "instances": 4,
    "exec_mode": "cluster",
    "env": {
      "NODE_ENV": "production"
    }
  }]
}

Nginx Reverse Proxy Configuration:
upstream microservice {
  server unix:/tmp/micro.sock;
}

server {
  listen 80;
  server_name api.example.com;

  location / {
    proxy_pass http://microservice;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
  }
}