Writes, optimizes, and debugs T-SQL queries. Explains SQL Server internals, troubleshoots performance issues, and guides database administration tasks including backup/restore, high availability, security, and index design. Use when the user asks about T-SQL syntax, SQL Server administration, query performance, stored procedures, indexes, locking, transactions, backup/restore, high availability, security, or any MSSQL-related topic — even without saying 'SQL Server' explicitly. Also trigger on terms like SSMS, tempdb, bcp, sqlcmd, MSSQL, sp_executesql, NOLOCK, columnstore, Hekaton, RCSI, param sniffing, or execution plan.
SQL Server CLI tools, bulk load utilities, and PowerShell automation reference for developers and DBAs.
| Task | Tool |
|---|---|
| Script execution, CI/CD automation | sqlcmd |
| Fast data export/import between files and SQL Server | bcp |
| Schema deploy/diff, dacpac/bacpac packaging | sqlpackage |
| Interactive SSMS-like terminal | mssql-cli |
| Bulk load from a file on the SQL Server machine | BULK INSERT |
| Bulk load from a file accessible to the SQL Engine | OPENROWSET BULK |
| PowerShell automation, backup/restore, SSAS/SSRS mgmt | SqlServer module |
| Tool | Ships With | Platform | Auth | Notes |
|---|---|---|---|---|
| sqlcmd (legacy) | SQL Server tools | Win/Linux/macOS | SQL + Windows | Scriptable, -v variables, batch mode |
| sqlcmd (go) | Standalone install | Win/Linux/macOS | SQL + Windows + AAD | v21+, replaces legacy |
| bcp | SQL Server tools | Win/Linux | SQL + Windows | Bulk copy program |
| sqlpackage | Standalone DacFx | Win/Linux/macOS | SQL + Windows + AAD | Schema deploy/diff |
| mssql-cli | pip install | Win/Linux/macOS | SQL + Windows | Interactive REPL |
| SqlServer module | Install-Module | Win/Linux | SQL + Windows | PowerShell automation |
```bash
# Run a script file
sqlcmd -S myserver -d mydb -U sa -P 'pass' -i script.sql

# Run inline query
sqlcmd -S myserver -d mydb -U sa -P 'pass' -Q "SELECT @@VERSION"

# Trusted Windows auth
sqlcmd -S myserver -d mydb -E -i script.sql

# Write output to file
sqlcmd -S myserver -d mydb -E -i script.sql -o output.txt

# No header, no row count (for parseable output)
sqlcmd -S myserver -d mydb -E -Q "SELECT name FROM sys.databases" -h -1 -W
```

| Flag | Meaning |
|---|---|
| -S server[\instance] | Target server (also tcp:host,port) |
| -d database | Initial catalog |
| -U login / -P password | SQL authentication |
| -E | Windows integrated auth |
| -i file.sql | Input script |
| -o file.txt | Output file |
| -Q "query" | Run query and exit |
| -q "query" | Run query, stay in interactive mode |
| -v name=value | Set scripting variable |
| -b | Exit with error on batch failure |
| -V errorlevel | Minimum severity to set exit code |
| -h -1 | Suppress column headers |
| -W | Remove trailing spaces |
| -s , | Column separator (for CSV) |
| -t seconds | Query timeout |
| -l seconds | Login timeout |
| `-r [0\|1]` | Redirect error messages to stderr |
| -X | Disable commands: ED, !! (security hardening) |
| -f codepage | Input/output codepage |
| -u | Unicode output |
| -m errorlevel | Suppress messages below this severity |
```sql
-- In script file: reference with $(VarName)
SELECT TOP $(TopN) name FROM sys.databases WHERE name = '$(DbName)';
GO

-- Set a variable inside the script
:setvar MyVar "hello"
PRINT 'Value is $(MyVar)';
GO
```

```bash
# Pass from the command line
sqlcmd -S myserver -E -i script.sql -v TopN=10 DbName=AdventureWorks
```

```sql
-- Abort the entire script on any error
:on error exit

-- Explicitly check @@ERROR after a batch
CREATE TABLE ##temp (id INT);
GO
IF @@ERROR <> 0
BEGIN
    RAISERROR('Table creation failed', 16, 1);
END
GO

-- GO N executes the batch N times
INSERT INTO dbo.TestRows (val) VALUES (NEWID());
GO 1000

-- GO with a SQLCMD variable
INSERT INTO dbo.TestRows (val) VALUES (NEWID());
GO $(RowCount)
```

```bash
#!/bin/bash
set -e  # abort on error
sqlcmd \
  -S "$DB_SERVER" \
  -d "$DB_NAME" \
  -U "$DB_USER" \
  -P "$DB_PASS" \
  -b \
  -V 16 \
  -i migrations/V001__create_schema.sql
echo "Migration applied successfully"
```

[!NOTE] sqlcmd (go) v21+
The new Go-based `sqlcmd` supports Azure AD authentication (`--authentication-method`), interactive MFA (`ActiveDirectoryInteractive`), and JSON output (`--format json`). The legacy C-based sqlcmd is still available but receives only security fixes.
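A hedged sketch of the newer switches the note describes; the server name is a placeholder, and the exact flag spellings should be verified against `sqlcmd --help` for your installed version:

```bash
sqlcmd -S myserver.database.windows.net -d MyDatabase \
  --authentication-method ActiveDirectoryInteractive \
  --format json \
  -Q "SELECT TOP 5 name FROM sys.databases"
```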
```bash
# Export: queryout (run a query, write to file)
bcp "SELECT * FROM AdventureWorks.HumanResources.Employee" queryout employees.dat -S myserver -T -c

# Export: out (entire table)
bcp AdventureWorks.HumanResources.Employee out employees.dat -S myserver -T -c

# Import: in
bcp AdventureWorks.dbo.EmployeeStage in employees.dat -S myserver -T -c

# Generate format file
bcp AdventureWorks.HumanResources.Employee format nul -S myserver -T -c -f employee.fmt

# Use existing format file on import
bcp AdventureWorks.dbo.EmployeeStage in employees.dat -S myserver -T -f employee.fmt
```

| Flag | Meaning |
|---|---|
| -S server | Target server |
| -d database | Database (use fully-qualified name in object, or -d shorthand) |
| -U login / -P password | SQL auth |
| -T | Windows integrated auth |
| -c | Character mode (UTF-8 friendly, field sep \t, row sep \n) |
| -w | Unicode character mode (NCHAR columns) |
| -n | Native mode (SQL Server binary, fast for SQL-to-SQL) |
| -N | Wide-character native (native for non-char, unicode for char) |
| -f format_file | Use format file |
| -F first_row | Skip N rows at start |
| -L last_row | Stop after row N |
| -b batchsize | Rows per commit batch |
| -h hints | Load hints: ORDER, ROWS_PER_BATCH, KILOBYTES_PER_BATCH, TABLOCK, CHECK_CONSTRAINTS, FIRE_TRIGGERS |
| -t field_term | Field terminator (default \t) |
| -r row_term | Row terminator (default \n) |
| -e errorfile | Log bad rows to file |
| -m maxerrors | Max errors before abort |
| -q | Quoted identifiers (required for reserved word table names) |
| -k | Keep NULLs (don't substitute column defaults) |
| -E | Keep identity values from file |
| -a packetsize | Network packet size (512–65535; larger improves throughput) |
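Combining the flags above, a hedged sketch of a fault-tolerant CSV import; server, table, and file names are placeholders:

```bash
# -c -t ","  character mode with comma separators
# -F 2       first row to load is 2, i.e. skip a header row
# -e / -m    log rejected rows to a file, abort only after 50 errors
# -b 5000    commit every 5,000 rows
bcp mydb.dbo.StagingTable in data.csv -S myserver -T -c -t "," \
  -F 2 -e bad_rows.txt -m 50 -b 5000
```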
| Mode | Flag | Use case | Notes |
|---|---|---|---|
| Character | -c | CSV/text, cross-platform | Field sep \t, row sep \n |
| Unicode char | -w | Unicode text data | Double the file size vs -c for ASCII |
| Native | -n | SQL Server → SQL Server | Fastest; not human-readable; includes data type metadata |
| Wide-char native | -N | Mixed char+binary | Native for binary, unicode for char columns |
```bash
# High-performance bulk load
bcp mydb.dbo.StagingTable in data.dat \
  -S myserver \
  -T \
  -n \
  -b 10000 \
  -a 65535 \
  -h "TABLOCK,ORDER(id ASC),ROWS_PER_BATCH=1000000"
```

Minimal logging with bcp requires:
- SIMPLE or BULK_LOGGED recovery model
- a heap or empty clustered-index target with no active triggers
- the -h "TABLOCK" hint provided

(See the minimal logging checklist later in this reference for the full conditions.)

```bash
# Export with header row (sqlcmd handles this better than bcp)
sqlcmd -S myserver -E -Q "SET NOCOUNT ON; SELECT 'col1','col2'; SELECT col1, col2 FROM dbo.MyTable" \
  -s "," -W -o output.csv
```

[!NOTE]
`bcp` does not write column headers. Use `sqlcmd` or `sqlpackage` for header-inclusive CSV export, or prepend a header row from the shell.
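Prepending a header row from the shell, as the note suggests, needs no SQL Server at all; the file and column names here are purely illustrative:

```shell
# Simulate a header-less bcp character-mode export
printf '1,Smith\n2,Jones\n' > data.csv

# Prepend a header row
{ printf 'EmployeeID,LastName\n'; cat data.csv; } > with_header.csv

head -n1 with_header.csv   # first line is now the header row
```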
Format files describe the mapping between a data file's layout and a table's columns. They are required when the file's fields and the table's columns differ in count, order, or data type, or when you want to skip fields during load.

Generated with:

```bash
bcp AdventureWorks.HumanResources.Employee format nul -S myserver -T -c -f employee_char.fmt
```

Example output (employee_char.fmt):
```
14.0
13
1 SQLCHAR 0 12 "\t" 1 BusinessEntityID SQL_Latin1_General_CP1_CI_AS
2 SQLCHAR 0 50 "\t" 2 NationalIDNumber SQL_Latin1_General_CP1_CI_AS
3 SQLCHAR 0 1 "\t" 3 LoginID SQL_Latin1_General_CP1_CI_AS
4 SQLCHAR 0 50 "\t" 4 OrganizationNode SQL_Latin1_General_CP1_CI_AS
5 SQLCHAR 0 4 "\t" 5 OrganizationLevel SQL_Latin1_General_CP1_CI_AS
6 SQLCHAR 0 50 "\t" 6 JobTitle SQL_Latin1_General_CP1_CI_AS
7 SQLCHAR 0 24 "\t" 7 BirthDate SQL_Latin1_General_CP1_CI_AS
8 SQLCHAR 0 1 "\t" 8 MaritalStatus SQL_Latin1_General_CP1_CI_AS
9 SQLCHAR 0 1 "\t" 9 Gender SQL_Latin1_General_CP1_CI_AS
10 SQLCHAR 0 24 "\t" 10 HireDate SQL_Latin1_General_CP1_CI_AS
11 SQLCHAR 0 1 "\t" 11 SalariedFlag SQL_Latin1_General_CP1_CI_AS
12 SQLCHAR 0 4 "\t" 12 VacationHours SQL_Latin1_General_CP1_CI_AS
13 SQLCHAR 0 8 "\r\n" 13 SickLeaveHours SQL_Latin1_General_CP1_CI_AS
```

Format file column layout:

```
HostFileFieldOrder DataType PrefixLen FieldLen Terminator TableColOrder TableColName Collation
```

To skip a column (e.g., skip file column 2), set its TableColOrder to 0:

```
2 SQLCHAR 0 50 "\t" 0 "" ""
```

XML format files separate the data-file layout (`<RECORD>`) from the table mapping (`<ROW>`):

```xml
<?xml version="1.0"?>
<BCPFORMAT xmlns="http://schemas.microsoft.com/sqlserver/2004/bulkload/format"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <RECORD>
    <FIELD ID="1" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="12"/>
    <FIELD ID="2" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="100"/>
    <FIELD ID="3" xsi:type="CharTerm" TERMINATOR="\r\n" MAX_LENGTH="50"/>
  </RECORD>
  <ROW>
    <COLUMN SOURCE="1" NAME="EmployeeID" xsi:type="SQLINT"/>
    <COLUMN SOURCE="3" NAME="Department" xsi:type="SQLNVARCHAR"/>
    <!-- SOURCE="2" skipped — not mapped to any ROW COLUMN -->
  </ROW>
</BCPFORMAT>
```

XML format files support:
- explicit target data types per column (xsi:type)
- skipping fields by omitting them from the `<ROW>` section

Loads a data file directly into a table. The file must be accessible to the SQL Server engine (local path or UNC share).
```sql
-- Basic character-mode load
BULK INSERT dbo.StagingTable
FROM 'C:\data\employees.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    FIRSTROW = 2,        -- skip header row
    MAXERRORS = 10,
    ERRORFILE = 'C:\data\errors.txt',
    TABLOCK
);

-- Using a format file
BULK INSERT dbo.StagingTable
FROM '\\fileserver\share\employees.dat'
WITH (
    FORMATFILE = '\\fileserver\share\employee.fmt',
    TABLOCK,
    BATCHSIZE = 10000,
    CHECK_CONSTRAINTS,
    FIRE_TRIGGERS
);

-- S3-compatible storage via an external data source (SQL Server 2022+)
BULK INSERT dbo.StagingTable
FROM 'data/employees.csv'            -- path relative to the data source location
WITH (
    DATA_SOURCE = 'MyS3DataSource',  -- s3://mybucket external data source
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    FIRSTROW = 2
);
```

[!NOTE] SQL Server 2022
`BULK INSERT` can read from S3-compatible object storage via an external data source whose s3:// location is secured by a database-scoped credential. Create the credential with `CREATE DATABASE SCOPED CREDENTIAL`, then reference it from `CREATE EXTERNAL DATA SOURCE`.
| Option | Default | Notes |
|---|---|---|
| BATCHSIZE | All rows | Rows per commit; larger = fewer commits but more log space |
| CHECK_CONSTRAINTS | OFF | Validate FK/CHECK during load |
| CODEPAGE | 'ACP' | For CHAR data: 'ACP', 'OEM', 'RAW', or numeric codepage |
| DATAFILETYPE | 'char' | 'char', 'native', 'widechar', 'widenative' |
| ERRORFILE | none | Row-level errors written here |
| FIELDQUOTE | none | Quote character (2017+) |
| FIELDTERMINATOR | '\t' | Column separator |
| FIRE_TRIGGERS | OFF | Execute INSERT triggers during load |
| FIRSTROW | 1 | Row number to start loading |
| FORMAT | none | 'CSV' uses RFC 4180 rules (2017+) |
| FORMATFILE | none | Format file path |
| KEEPIDENTITY | OFF | Use identity values from file |
| KEEPNULLS | OFF | Preserve NULLs instead of using column defaults |
| LASTROW | 0 (all) | Stop at this row number |
| MAXERRORS | 10 | Abort after N errors |
| ORDER | none | (col ASC/DESC) for minimal logging optimization |
| ROWTERMINATOR | '\n' | Row delimiter |
| ROWS_PER_BATCH | auto | Hint to optimizer for load batching |
| TABLOCK | OFF | Table-level lock; required for minimal logging |
[!NOTE] SQL Server 2017
`FORMAT = 'CSV'` and `FIELDQUOTE` options added, enabling RFC 4180 compliant CSV parsing including quoted fields with embedded commas.
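The 2017+ options combine like this; a sketch assuming a headered file with quoted fields (path and table name are placeholders):

```sql
-- FORMAT = 'CSV' applies RFC 4180 rules, so "Smith, John" stays one field
BULK INSERT dbo.StagingTable
FROM 'C:\data\quoted.csv'
WITH (
    FORMAT = 'CSV',
    FIELDQUOTE = '"',   -- double-quote is also the default
    FIRSTROW = 2        -- skip the header row
);
```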
Allows reading a file as a table in a query — useful for ad-hoc inspection or INSERT…SELECT patterns. The file path is resolved by the SQL Server engine.
```sql
-- Ad-hoc CSV read
SELECT *
FROM OPENROWSET(
    BULK 'C:\data\employees.csv',
    FORMATFILE = 'C:\data\employee.xml',
    FIRSTROW = 2
) AS src;

-- SINGLE_CLOB: read an entire text file as one value
SELECT BulkColumn
FROM OPENROWSET(BULK 'C:\scripts\myscript.sql', SINGLE_CLOB) AS t;

-- Insert using OPENROWSET BULK
INSERT INTO dbo.StagingTable
SELECT *
FROM OPENROWSET(
    BULK 'C:\data\employees.dat',
    FORMATFILE = 'C:\data\employee.fmt'
) AS src;

-- S3-compatible via an external data source (SQL Server 2022+)
SELECT *
FROM OPENROWSET(
    BULK 'data/employees.csv',
    DATA_SOURCE = 'MyS3DataSource',  -- s3:// data source with a scoped credential
    FORMAT = 'CSV',
    FIRSTROW = 2
) WITH (
    EmployeeID INT,
    LastName NVARCHAR(50),
    Department NVARCHAR(100)
) AS src;
```

| Option | Type returned | Use case |
|---|---|---|
| SINGLE_CLOB | VARCHAR(MAX) | Read entire text file as one value |
| SINGLE_NCLOB | NVARCHAR(MAX) | Read entire unicode file |
| SINGLE_BLOB | VARBINARY(MAX) | Read binary file (images, etc.) |
[!WARNING]
`OPENROWSET` requires `Ad Hoc Distributed Queries` to be enabled:

```sql
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1; RECONFIGURE;
```

This is disabled by default for security. Enable only when needed; prefer BULK INSERT for automated loads.
Minimal logging dramatically reduces transaction log growth during bulk loads. All conditions must be met:

- Database in SIMPLE or BULK_LOGGED recovery model
- TABLOCK hint specified
- Target table not replicated
- No triggers on the target (FIRE_TRIGGERS not specified)
- Target is a heap, or an empty table with a clustered index (or the ORDER hint matches the clustered key)

The load is fully logged regardless if:
- CHECK_CONSTRAINTS on (FK/CHECK validation fully logged)
- FIRE_TRIGGERS on

```sql
-- Check transaction log usage before/after to compare
SELECT total_log_size_mb, active_log_size_mb, log_truncation_holdup_reason
FROM sys.dm_db_log_stats(DB_ID());

-- During load: look for BULK% wait types
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'BULK%';
```

[!WARNING] Recovery model switch
Switching from FULL to BULK_LOGGED does not break the log backup chain, but a log backup that contains minimally logged operations can only be restored in full — no point-in-time restore within it. Take a log backup immediately before switching to BULK_LOGGED and another immediately after switching back to FULL, so the non-restorable window is as small as possible.
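The recovery model switch around a bulk-logged load follows a fixed order of operations; sketched as T-SQL (database and backup paths are placeholders):

```sql
-- Bracket the minimally logged window with log backups
BACKUP LOG MyDB TO DISK = 'C:\backups\MyDB_prelog.trn';
ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;

-- ... run the bulk load here ...

ALTER DATABASE MyDB SET RECOVERY FULL;
BACKUP LOG MyDB TO DISK = 'C:\backups\MyDB_postlog.trn';
-- Point-in-time restore is unavailable only between the two log backups
```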
sqlpackage is the CLI wrapper around DacFx — the same engine SSMS uses for schema compare and deployment.
| Action | Description |
|---|---|
| Publish | Deploy a dacpac to a database (schema + optionally data) |
| Extract | Extract a dacpac from a live database |
| Export | Export a bacpac from a live database (schema + data) |
| Import | Import a bacpac into a new/existing database |
| DeployReport | XML report of what Publish would change |
| DriftReport | XML report of schema drift since last publish |
| Script | Generate T-SQL deployment script without executing |
```bash
# Extract a dacpac from a live database
sqlpackage /Action:Extract \
  /SourceServerName:myserver \
  /SourceDatabaseName:MyDatabase \
  /SourceUser:sa \
  /SourcePassword:pass \
  /TargetFile:MyDatabase.dacpac

# Publish a dacpac
sqlpackage /Action:Publish \
  /SourceFile:MyDatabase.dacpac \
  /TargetServerName:targetserver \
  /TargetDatabaseName:MyDatabase \
  /TargetUser:sa \
  /TargetPassword:pass \
  /p:BlockOnPossibleDataLoss=true \
  /p:DropObjectsNotInSource=false \
  /p:IgnorePermissions=true

# Generate the deployment script without executing
sqlpackage /Action:Script \
  /SourceFile:MyDatabase.dacpac \
  /TargetServerName:targetserver \
  /TargetDatabaseName:MyDatabase \
  /TargetUser:sa \
  /TargetPassword:pass \
  /OutputPath:deploy.sql

# XML report of what Publish would change
sqlpackage /Action:DeployReport \
  /SourceFile:MyDatabase.dacpac \
  /TargetServerName:targetserver \
  /TargetDatabaseName:MyDatabase \
  /TargetUser:sa \
  /TargetPassword:pass \
  /OutputPath:report.xml

# Export schema + data
sqlpackage /Action:Export \
  /SourceServerName:myserver \
  /SourceDatabaseName:MyDatabase \
  /TargetFile:MyDatabase.bacpac \
  /SourceUser:sa /SourcePassword:pass

# Import to new database
sqlpackage /Action:Import \
  /SourceFile:MyDatabase.bacpac \
  /TargetServerName:targetserver \
  /TargetDatabaseName:MyDatabaseCopy \
  /TargetUser:sa /TargetPassword:pass
```

```yaml
# .github/workflows/deploy-db.yml
name: Deploy Database
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install sqlpackage
        run: |
          curl -sSL https://aka.ms/sqlpackage-linux -o sqlpackage.zip
          unzip sqlpackage.zip -d ~/sqlpackage
          chmod +x ~/sqlpackage/sqlpackage
      - name: Deploy Report (what will change)
        run: |
          ~/sqlpackage/sqlpackage /Action:DeployReport \
            /SourceFile:./database/MyDatabase.dacpac \
            /TargetServerName:${{ secrets.DB_SERVER }} \
            /TargetDatabaseName:MyDatabase \
            /TargetUser:${{ secrets.DB_USER }} \
            /TargetPassword:${{ secrets.DB_PASS }} \
            /OutputPath:/tmp/deploy-report.xml
      - name: Deploy dacpac
        run: |
          ~/sqlpackage/sqlpackage /Action:Publish \
            /SourceFile:./database/MyDatabase.dacpac \
            /TargetServerName:${{ secrets.DB_SERVER }} \
            /TargetDatabaseName:MyDatabase \
            /TargetUser:${{ secrets.DB_USER }} \
            /TargetPassword:${{ secrets.DB_PASS }} \
            /p:BlockOnPossibleDataLoss=true \
            /p:DropObjectsNotInSource=false \
            /p:IgnorePermissions=true \
            /p:ExcludeObjectTypes=Logins
```

| Property | Default | Notes |
|---|---|---|
| BlockOnPossibleDataLoss | true | Abort if operation could lose data |
| DropObjectsNotInSource | false | Drop objects in target not in dacpac |
| IgnorePermissions | false | Don't deploy GRANT/REVOKE statements |
| IgnoreRoleMembership | false | Skip role membership changes |
| ExcludeObjectTypes | none | Comma-separated list of types to skip |
| CreateNewDatabase | false | Create DB if it doesn't exist |
| BackupDatabaseBeforeChanges | false | Take backup before deploying |
| ScriptDatabaseCompatibility | true | Include compat level in deploy |
| IncludeCompositeObjects | false | Include referenced objects in same server |
mssql-cli is an interactive terminal client with syntax highlighting, multi-line editing, and auto-completion. Install via pip:

```bash
pip install mssql-cli

# Connect
mssql-cli -S myserver -U sa -P pass -d MyDatabase

# Windows integrated auth
mssql-cli -S myserver -E -d MyDatabase
```

Special commands:
- F3 / \e — Toggle multi-line mode
- F4 — Toggle syntax highlighting
- \l — List databases
- \dn — List schemas
- \dt — List tables
- \d tablename — Describe table
- \q or quit — Exit

Config file (~/.config/mssql-cli/config):

```ini
[main]
multi_line = True
style = solarized
row_limit = 1000
less_chatty = True
```

[!NOTE]
`mssql-cli` is currently in maintenance mode. For production scripting, use `sqlcmd`. For interactive development, it remains useful for syntax highlighting and autocomplete.
```powershell
# Install
Install-Module -Name SqlServer -AllowClobber -Force

# Import
Import-Module SqlServer
```

```powershell
# Run a query
Invoke-Sqlcmd -ServerInstance "myserver" `
    -Database "MyDB" `
    -Query "SELECT TOP 10 name FROM sys.tables" `
    -Username "sa" `
    -Password "pass"

# From file with variables
Invoke-Sqlcmd -ServerInstance "myserver" `
    -Database "MyDB" `
    -InputFile "C:\scripts\deploy.sql" `
    -Variable @("Env=Prod", "MaxRows=1000") `
    -TrustServerCertificate

# Capture output to CSV
Invoke-Sqlcmd -ServerInstance "myserver" `
    -Database "MyDB" `
    -Query "SELECT * FROM dbo.Orders" |
    Export-Csv -Path "orders.csv" -NoTypeInformation
```

```powershell
# Full backup
Backup-SqlDatabase -ServerInstance "myserver" `
    -Database "MyDB" `
    -BackupFile "C:\backups\MyDB_full.bak" `
    -CompressionOption On

# Differential backup
Backup-SqlDatabase -ServerInstance "myserver" `
    -Database "MyDB" `
    -BackupFile "C:\backups\MyDB_diff.bak" `
    -BackupAction Differential `
    -CompressionOption On

# Log backup
Backup-SqlDatabase -ServerInstance "myserver" `
    -Database "MyDB" `
    -BackupFile "C:\backups\MyDB_log.bak" `
    -BackupAction Log `
    -CompressionOption On

# Restore
Restore-SqlDatabase -ServerInstance "myserver" `
    -Database "MyDB_Restored" `
    -BackupFile "C:\backups\MyDB_full.bak" `
    -ReplaceDatabase `
    -NoRecovery
```
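Because the restore above ends with -NoRecovery, the database stays in RESTORING until log backups are applied and recovery runs. A hedged follow-up sketch (backup file names are placeholders):

```powershell
# Apply a log backup, still without recovery
Restore-SqlDatabase -ServerInstance "myserver" `
    -Database "MyDB_Restored" `
    -BackupFile "C:\backups\MyDB_log.bak" `
    -RestoreAction Log `
    -NoRecovery

# Bring the database online
Invoke-Sqlcmd -ServerInstance "myserver" `
    -Query "RESTORE DATABASE [MyDB_Restored] WITH RECOVERY"
```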
```powershell
# List all databases
Get-SqlDatabase -ServerInstance "myserver"

# Get index fragmentation via SMO
$srv = New-Object Microsoft.SqlServer.Management.Smo.Server("myserver")
$db = $srv.Databases["MyDB"]
$db.Tables | ForEach-Object {
    $_.Indexes | ForEach-Object {
        [PSCustomObject]@{
            Table = $_.Parent.Name
            Index = $_.Name
            AverageFragmentation = ($_.EnumFragmentation() |
                Select-Object -ExpandProperty AverageFragmentation)
        }
    }
}
```

| Dimension | BULK INSERT | bcp |
|---|---|---|
| Execution context | T-SQL (inside SQL Server) | Command line (client machine) |
| File location | SQL Server machine or UNC share | Client machine or UNC share |
| Authentication | Current SQL connection | -T / -U -P |
| S3/Azure Blob | Yes (2022+) | No |
| Format files | Yes | Yes (generates and uses) |
| Column header skip | FIRSTROW = 2 | -F 2 |
| Minimal logging | Yes (with correct conditions) | Yes (with correct conditions) |
| Error file | ERRORFILE = '...' | -e errorfile |
| Use in scripts | T-SQL only | Any shell/CI environment |
| SSPI Kerberos | Inherited from connection | -T |
| Transform during load | No (use OPENROWSET + SELECT) | No |
| Network load | Possible if UNC accessible | Client-side; can be faster |
| Automation | Via SQL Agent job | Shell script / cron |
Rule of thumb:
- bcp when the file is on the client machine and you need to automate from a shell
- BULK INSERT when the file is on the server or accessible via UNC and you want T-SQL control
- OPENROWSET BULK for one-off SELECT-based inspection or INSERT…SELECT transforms

bcp native mode is not portable. Native (-n) files are tied to the source SQL Server version. Do not use native mode for cross-version migrations; use character mode or XML format files.
bcp does not write column headers. Export headers separately with sqlcmd or prepend manually. Importing a file with headers requires FIRSTROW = 2 or -F 2 to skip.
BULK INSERT file path is server-side. The path in BULK INSERT ... FROM 'C:\...' must be accessible to the SQL Server service account, not the application server. Use UNC shares for cross-machine loads.
Minimal logging is all-or-nothing per load. If even one condition is unmet (e.g., an NCI exists), the entire load reverts to full logging. Use sys.dm_db_log_stats to verify before production runs.
sqlcmd exit codes. Without -b (abort on error), sqlcmd exits 0 even after T-SQL errors. Always add -b -V 16 in CI/CD pipelines to detect failures.
sqlpackage BlockOnPossibleDataLoss default is true. This will abort if a column is being narrowed, a type is changing, or a table is being dropped. Set to false only with explicit DBA review — it exists to prevent accidents.
sqlpackage deploys statistics objects by default, which causes needless churn in CI. Add /p:IgnoreStatistics=true and /p:DoNotDropObjectTypes=Statistics in most CI pipelines.
OPENROWSET BULK requires Ad Hoc Distributed Queries. This is a server-wide setting with security implications. Prefer BULK INSERT from a scheduled job over keeping Ad Hoc Distributed Queries permanently enabled.
bcp format file column numbers are 1-based and the TableColOrder of 0 means "skip this field". Confusion between these two columns causes silent data misalignment.
Character mode bcp and DATETIME format. bcp -c exports dates as locale-dependent strings. Use a format file with SQLDATETIME or use -q (quoted identifiers) and ensure SET DATEFORMAT matches on import. Prefer ISO 8601 format (yyyy-MM-dd) in character files.
sqlpackage on Linux can lag behind the Windows build. The Linux build occasionally lags on feature support; verify the installed version with sqlpackage /version before relying on newer publish properties.
mssql-cli Python dependency conflicts. mssql-cli requires Python 3.6–3.9 on some platforms. Use a virtual environment to isolate it from system Python packages.
- references/44-backup-restore.md — BACKUP/RESTORE T-SQL reference
- references/46-polybase-external-tables.md — S3/Azure Blob external data sources
- references/36-data-compression.md — Compression and bulk load interaction
- references/50-sql-server-agent.md — Scheduling bulk jobs via SQL Agent