IR-011 · cloud · v1.0.0

AI/ML Model Abuse and Training Data Poisoning

⚠️ Severity: medium / high / critical
⏱ Est. Time: 60m
📋 Steps: 10
🔧 Tools: 6 required
🔗 Integrations: 5 platforms
📊 Avg Resolution: 90m

🔧 Tools Required

siem · cloud console · api gateway · dlp platform · ml platform logs · threat intelligence platform

⚡ Triggers

ml_api_rate_limit_breach · inference_cost_spike_alert · model_endpoint_anomaly · training_pipeline_unauthorized_access · siem_ml_api_abuse · dlp_training_data_exfiltration

🔌 Integrations

openai api (optional)

Usage dashboard and API key management for OpenAI-hosted models

aws sagemaker (optional)

CloudWatch metrics for SageMaker endpoints — inference anomaly detection

azure ml (optional)

Azure Monitor for ML workspace activity and model registry changes

splunk (optional)

Ingest model API logs for SIEM correlation and anomaly queries

datadog (optional)

APM for model serving infrastructure and cost anomaly detection

Each step below includes the full procedure, an automation hint, and expected outputs.

Identify which model endpoint, API key, or service account is at the center of the anomaly. Review the triggering alert for details: API key ID, model name, endpoint URL, and the nature of the abuse (rate limit breach, unusual request payloads, cost spike). Determine whether the caller is an internal service, external application, or unknown third party. Check if the key is associated with a human developer or a machine identity (CI/CD pipeline, service account).

⚡ Automation Hint

OpenAI: GET https://api.openai.com/v1/usage — filter by date and API key.
AWS SageMaker: aws cloudwatch get-metric-statistics --namespace AWS/SageMaker --metric-name InvocationsPerInstance --dimensions Name=EndpointName,Value=<endpoint>
GCP Vertex AI: gcloud logging read 'resource.type="aiplatform.googleapis.com/Endpoint"'
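The CloudWatch command above returns per-period invocation datapoints. A minimal sketch of flagging a rate/cost spike from such datapoints (the datapoint shape mirrors the `get-metric-statistics` response; the 3×-median threshold and the synthetic values are illustrative assumptions, not part of the playbook):

```python
# Sketch: flag anomalous invocation spikes in CloudWatch-style datapoints.
# The 3x-median threshold is an illustrative assumption; tune per endpoint.
from statistics import median

def find_spikes(datapoints, factor=3.0):
    """Return timestamps of periods whose Sum exceeds factor * median Sum."""
    baseline = median(dp["Sum"] for dp in datapoints)
    return [dp["Timestamp"] for dp in datapoints if dp["Sum"] > factor * baseline]

# Synthetic example: steady traffic with one roughly 10x burst.
points = [
    {"Timestamp": "2024-05-01T00:00:00Z", "Sum": 120.0},
    {"Timestamp": "2024-05-01T01:00:00Z", "Sum": 110.0},
    {"Timestamp": "2024-05-01T02:00:00Z", "Sum": 1150.0},
    {"Timestamp": "2024-05-01T03:00:00Z", "Sum": 130.0},
]
print(find_spikes(points))  # → ['2024-05-01T02:00:00Z']
```

The same check applies to OpenAI daily usage totals or Vertex AI request counts once they are normalized into the same `{Timestamp, Sum}` shape.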

📤 Outputs

abused_api_key_id · model_endpoint_name · calling_principal · abuse_type
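A minimal triage sketch that derives these outputs from a triggering alert payload. The alert field names and the `svc-`/`ci-` machine-identity prefixes are hypothetical conventions for illustration, not a fixed schema from the playbook:

```python
# Sketch: derive the step's outputs from a triggering alert payload.
# Alert schema and principal-name heuristics are illustrative assumptions.

def triage(alert: dict) -> dict:
    principal = alert.get("principal", "unknown")
    # Heuristic: service-account-style prefixes suggest a machine identity
    # (CI/CD pipeline or service account) rather than a human developer.
    is_machine = principal.startswith(("svc-", "ci-", "pipeline-"))
    return {
        "abused_api_key_id": alert.get("api_key_id", "unknown"),
        "model_endpoint_name": alert.get("endpoint", "unknown"),
        "calling_principal": principal,
        "abuse_type": alert.get("trigger", "unknown"),
        "machine_identity": is_machine,
    }

example_alert = {
    "trigger": "ml_api_rate_limit_breach",
    "api_key_id": "key-1234",
    "endpoint": "prod-llm-inference",
    "principal": "svc-batch-scoring",
}
print(triage(example_alert))
```

Feeding each trigger listed above through a normalizer like this keeps the outputs consistent regardless of which platform (OpenAI, SageMaker, Vertex AI) raised the alert.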