
COST09-BP03 - Supply resources dynamically

Implementation guidance

Dynamic resource supply involves implementing automated systems that can provision, scale, and de-provision resources based on real-time demand and predefined policies. This approach minimizes waste by ensuring resources are available when needed and removed when not required.

Dynamic Supply Strategies

Auto-Scaling: Automatically adjust the number of compute instances, containers, or other resources based on demand metrics and policies.

Serverless Computing: Use serverless architectures that automatically scale from zero to handle any level of demand without pre-provisioning resources.

Elastic Storage: Implement storage solutions that automatically expand and contract based on data volume and access patterns.

Just-in-Time Provisioning: Provision resources only when needed and de-provision them immediately after use to minimize costs.

Predictive Scaling: Use machine learning and historical data to predict demand and pre-scale resources proactively.

Implementation Patterns

Reactive Scaling: Scale resources in response to real-time metrics like CPU utilization, queue depth, or request rate.

Scheduled Scaling: Scale resources based on known patterns and scheduled events to optimize for predictable demand.

Multi-Dimensional Scaling: Scale different resource types (compute, memory, storage) independently based on specific utilization metrics.

Cross-Service Scaling: Coordinate scaling across multiple services and tiers to maintain optimal performance and cost efficiency.
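The reactive pattern maps directly onto target tracking in Application Auto Scaling. A minimal sketch of assembling the `put_scaling_policy` parameters for an ECS service (the cluster and service names here are illustrative, not from this document):

```python
def build_target_tracking_policy(resource_id: str, target_value: float = 70.0,
                                 scale_in_cooldown: int = 300,
                                 scale_out_cooldown: int = 300) -> dict:
    """Build put_scaling_policy kwargs for reactive (target tracking) scaling
    of an ECS service's desired count against average CPU utilization."""
    return {
        'PolicyName': f"{resource_id.split('/')[-1]}-cpu-target-tracking",
        'ServiceNamespace': 'ecs',
        'ResourceId': resource_id,          # e.g. service/<cluster>/<service>
        'ScalableDimension': 'ecs:service:DesiredCount',
        'PolicyType': 'TargetTrackingScaling',
        'TargetTrackingScalingPolicyConfiguration': {
            'TargetValue': target_value,
            'PredefinedMetricSpecification': {
                'PredefinedMetricType': 'ECSServiceAverageCPUUtilization'
            },
            'ScaleInCooldown': scale_in_cooldown,
            'ScaleOutCooldown': scale_out_cooldown,
        },
    }

# With credentials configured, the dict is passed straight through:
#   client = boto3.client('application-autoscaling')
#   client.put_scaling_policy(**build_target_tracking_policy(
#       'service/production/api-service'))
```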

AWS Services to Consider

AWS Auto Scaling

Automatically scale multiple AWS resources across services. Use Auto Scaling to implement comprehensive dynamic resource management across your entire application stack.

Amazon EC2 Auto Scaling

Automatically scale EC2 instances based on demand. Use EC2 Auto Scaling for compute resource optimization with predictive and reactive scaling capabilities.

AWS Lambda

Implement serverless computing that scales automatically from zero. Use Lambda for event-driven workloads that require instant scaling without resource management.
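For latency-sensitive Lambda workloads, provisioned concurrency keeps execution environments warm; it targets an alias or published version, not `$LATEST`. A sketch of the three parameters `put_provisioned_concurrency_config` takes (the function name and alias are hypothetical):

```python
def build_provisioned_concurrency_config(function_name: str, alias: str,
                                         concurrent_executions: int) -> dict:
    """Build put_provisioned_concurrency_config kwargs for a Lambda alias."""
    return {
        'FunctionName': function_name,
        # Provisioned concurrency must target an alias or version, never $LATEST.
        'Qualifier': alias,
        'ProvisionedConcurrentExecutions': concurrent_executions,
    }

# Usage (requires credentials):
#   boto3.client('lambda').put_provisioned_concurrency_config(
#       **build_provisioned_concurrency_config('api-function', 'live', 25))
```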

Amazon ECS/EKS Auto Scaling

Scale containerized applications automatically. Use container auto-scaling for microservices architectures with fine-grained resource control.

Amazon DynamoDB Auto Scaling

Automatically adjust DynamoDB capacity based on traffic patterns. Use DynamoDB auto-scaling to optimize database costs while maintaining performance.
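DynamoDB auto-scaling is configured through Application Auto Scaling by first registering the table as a scalable target. A minimal parameter sketch (the table name is illustrative):

```python
def build_dynamodb_scalable_target(table_name: str, min_capacity: int = 5,
                                   max_capacity: int = 4000,
                                   dimension: str = 'ReadCapacityUnits') -> dict:
    """Build register_scalable_target kwargs for a DynamoDB table's
    read or write capacity (dimension is 'ReadCapacityUnits' or
    'WriteCapacityUnits')."""
    return {
        'ServiceNamespace': 'dynamodb',
        'ResourceId': f'table/{table_name}',
        'ScalableDimension': f'dynamodb:table:{dimension}',
        'MinCapacity': min_capacity,
        'MaxCapacity': max_capacity,
    }

# Usage (requires credentials):
#   boto3.client('application-autoscaling').register_scalable_target(
#       **build_dynamodb_scalable_target('user-table'))
```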

Amazon Aurora Serverless

Use a serverless database engine that scales capacity automatically with demand. Implement Aurora Serverless for variable or intermittent database workloads where provisioned capacity would sit idle.

Implementation Steps

1. Define Scaling Policies

  • Establish scaling triggers and thresholds based on demand analysis
  • Define scaling policies for different resource types and services
  • Set up scaling boundaries and safety limits
  • Create policies for both scale-up and scale-down scenarios
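The scaling boundaries and safety limits above can be captured in a small, validated policy object before any AWS configuration is written. A sketch, assuming thresholds are utilization percentages:

```python
from dataclasses import dataclass

@dataclass
class PolicyBounds:
    """Scaling boundaries and safety limits for one resource type."""
    min_capacity: int
    max_capacity: int
    scale_up_threshold: float    # e.g. CPU % that triggers scale-up
    scale_down_threshold: float  # must sit below the scale-up threshold

    def validate(self) -> None:
        """Reject policies that could flap or exceed safe capacity limits."""
        if not 0 < self.min_capacity <= self.max_capacity:
            raise ValueError("min_capacity must be positive and <= max_capacity")
        if self.scale_down_threshold >= self.scale_up_threshold:
            raise ValueError("scale-down threshold must be below scale-up threshold")
```

Validating the gap between the two thresholds is what prevents the scale-up and scale-down triggers from oscillating against each other.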

2. Implement Auto-Scaling Infrastructure

  • Deploy auto-scaling groups and policies for compute resources
  • Configure database auto-scaling for storage and throughput
  • Set up container orchestration with auto-scaling capabilities
  • Implement serverless architectures where appropriate
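As a sketch of the first bullet, the parameters for EC2 Auto Scaling's `create_auto_scaling_group` call can be assembled like this (the group name, launch template, and subnet IDs are illustrative):

```python
def build_asg_request(name: str, launch_template_name: str,
                      subnet_ids: list, min_size: int = 1,
                      max_size: int = 10, desired: int = 2) -> dict:
    """Build create_auto_scaling_group kwargs with sensible defaults."""
    return {
        'AutoScalingGroupName': name,
        'LaunchTemplate': {
            'LaunchTemplateName': launch_template_name,
            'Version': '$Latest',
        },
        'MinSize': min_size,
        'MaxSize': max_size,
        'DesiredCapacity': desired,
        # VPCZoneIdentifier is a single comma-separated string of subnet IDs.
        'VPCZoneIdentifier': ','.join(subnet_ids),
        'HealthCheckType': 'EC2',
        'HealthCheckGracePeriod': 300,
    }

# Usage (requires credentials):
#   boto3.client('autoscaling').create_auto_scaling_group(
#       **build_asg_request('web-asg', 'web-template',
#                           ['subnet-aaa', 'subnet-bbb']))
```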

3. Configure Monitoring and Metrics

  • Set up comprehensive monitoring for scaling triggers
  • Configure custom metrics for application-specific scaling
  • Implement health checks and performance monitoring
  • Create dashboards for scaling activity visibility
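Scaling triggers frequently key off queue or custom metrics rather than CPU. A sketch of building `put_metric_alarm` parameters for an SQS queue-depth trigger (the alarm and queue names are hypothetical):

```python
def build_scaling_alarm(alarm_name: str, queue_name: str,
                        threshold: float = 100.0) -> dict:
    """Build put_metric_alarm kwargs that fire when average queue depth
    stays above the threshold for three consecutive minutes."""
    return {
        'AlarmName': alarm_name,
        'Namespace': 'AWS/SQS',
        'MetricName': 'ApproximateNumberOfMessagesVisible',
        'Dimensions': [{'Name': 'QueueName', 'Value': queue_name}],
        'Statistic': 'Average',
        'Period': 60,             # one-minute evaluation windows
        'EvaluationPeriods': 3,   # require 3 breaching periods to avoid flapping
        'Threshold': threshold,
        'ComparisonOperator': 'GreaterThanThreshold',
    }

# Usage (requires credentials):
#   boto3.client('cloudwatch').put_metric_alarm(
#       **build_scaling_alarm('worker-queue-depth-high', 'work-queue'))
```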

4. Test Scaling Behavior

  • Test scaling policies under various load conditions
  • Validate scaling performance and timing
  • Ensure scaling doesn’t impact application availability
  • Test scale-down behavior and resource cleanup
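Scaling behavior can also be sanity-checked offline before load testing. This toy simulator replays a per-minute utilization trace against simple thresholds with a cooldown; it is a model for reasoning about policy interactions, not a reproduction of AWS behavior:

```python
def simulate_scaling(load: list, up_thresh: float = 70, down_thresh: float = 30,
                     min_cap: int = 1, max_cap: int = 10,
                     cooldown: int = 2) -> list:
    """Replay a per-minute utilization trace (%) against threshold policies.

    Returns the capacity after each minute, stepping by one instance per
    action and honoring a shared cooldown between actions."""
    capacity, last_action, history = min_cap, -cooldown, []
    for t, util in enumerate(load):
        if t - last_action >= cooldown:
            if util > up_thresh and capacity < max_cap:
                capacity, last_action = capacity + 1, t
            elif util < down_thresh and capacity > min_cap:
                capacity, last_action = capacity - 1, t
        history.append(capacity)
    return history
```

Running a burst-then-idle trace shows both the ramp-up and the cooldown-limited scale-down and cleanup back to minimum capacity.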

5. Optimize Scaling Parameters

  • Fine-tune scaling thresholds and timing parameters
  • Optimize for cost efficiency while maintaining performance
  • Implement predictive scaling where beneficial
  • Continuously monitor and adjust scaling behavior

6. Implement Advanced Scaling Features

  • Deploy predictive scaling using machine learning
  • Implement multi-dimensional scaling strategies
  • Set up cross-service scaling coordination
  • Create custom scaling solutions for specific requirements

Dynamic Resource Supply Framework

Dynamic Resource Manager

import boto3
from datetime import datetime
from dataclasses import dataclass
from typing import Dict
from enum import Enum

class ScalingStrategy(Enum):
    REACTIVE = "reactive"
    PREDICTIVE = "predictive"
    SCHEDULED = "scheduled"
    HYBRID = "hybrid"

class ResourceType(Enum):
    COMPUTE = "compute"
    STORAGE = "storage"
    DATABASE = "database"
    CONTAINER = "container"
    SERVERLESS = "serverless"

@dataclass
class ScalingPolicy:
    resource_type: ResourceType
    strategy: ScalingStrategy
    scale_up_threshold: float
    scale_down_threshold: float
    scale_up_adjustment: int
    scale_down_adjustment: int
    cooldown_period: int
    min_capacity: int
    max_capacity: int

@dataclass
class ResourceMetrics:
    timestamp: datetime
    resource_id: str
    cpu_utilization: float
    memory_utilization: float
    request_count: float
    response_time: float
    queue_depth: int

class DynamicResourceManager:
    def __init__(self):
        self.autoscaling = boto3.client('autoscaling')
        self.ecs = boto3.client('ecs')
        self.lambda_client = boto3.client('lambda')
        self.dynamodb = boto3.client('dynamodb')
        self.cloudwatch = boto3.client('cloudwatch')
        self.application_autoscaling = boto3.client('application-autoscaling')
        
        # Scaling state management
        self.scaling_activities = {}
        self.resource_states = {}
        self.scaling_history = []
        
    def implement_ec2_auto_scaling(self, scaling_config: Dict) -> Dict:
        """Implement EC2 Auto Scaling configuration"""
        
        auto_scaling_config = {
            'auto_scaling_group_name': scaling_config['asg_name'],
            'launch_template': {
                'launch_template_name': scaling_config['launch_template_name'],
                'version': scaling_config.get('template_version', '$Latest')
            },
            'min_size': scaling_config.get('min_size', 1),
            'max_size': scaling_config.get('max_size', 10),
            'desired_capacity': scaling_config.get('desired_capacity', 2),
            'vpc_zone_identifier': scaling_config['subnet_ids'],
            'health_check_type': scaling_config.get('health_check_type', 'EC2'),
            'health_check_grace_period': scaling_config.get('health_check_grace_period', 300),
            'default_cooldown': scaling_config.get('default_cooldown', 300),
            'scaling_policies': []
        }
        
        # Target tracking policy (adjusts capacity in both directions)
        target_tracking_policy = {
            'policy_name': f"{scaling_config['asg_name']}-target-tracking",
            'policy_type': 'TargetTrackingScaling',
            'target_tracking_configuration': {
                'target_value': scaling_config.get('target_cpu_utilization', 70.0),
                'predefined_metric_specification': {
                    'predefined_metric_type': 'ASGAverageCPUUtilization'
                },
                'scale_out_cooldown': scaling_config.get('scale_out_cooldown', 300),
                'scale_in_cooldown': scaling_config.get('scale_in_cooldown', 300)
            }
        }
        auto_scaling_config['scaling_policies'].append(target_tracking_policy)
        
        # Add predictive scaling if enabled
        if scaling_config.get('enable_predictive_scaling', False):
            predictive_policy = {
                'policy_name': f"{scaling_config['asg_name']}-predictive",
                'policy_type': 'PredictiveScaling',
                'predictive_scaling_configuration': {
                    'metric_specifications': [
                        {
                            'target_value': scaling_config.get('target_cpu_utilization', 70.0),
                            'predefined_metric_specification': {
                                'predefined_metric_type': 'ASGAverageCPUUtilization'
                            }
                        }
                    ],
                    'mode': scaling_config.get('predictive_mode', 'ForecastAndScale'),
                    'scheduling_buffer_time': scaling_config.get('scheduling_buffer_time', 300)
                }
            }
            auto_scaling_config['scaling_policies'].append(predictive_policy)
        
        return auto_scaling_config
    
    def implement_serverless_scaling(self, serverless_config: Dict) -> Dict:
        """Implement serverless scaling configuration"""
        
        serverless_scaling_config = {
            'lambda_functions': [],
            'aurora_serverless': [],
            'fargate_services': []
        }
        
        # Lambda function scaling configuration
        for lambda_config in serverless_config.get('lambda_functions', []):
            lambda_scaling = {
                'function_name': lambda_config['function_name'],
                'reserved_concurrency': lambda_config.get('reserved_concurrency'),
                'provisioned_concurrency': lambda_config.get('provisioned_concurrency'),
                'auto_scaling': {
                    'enabled': lambda_config.get('enable_auto_scaling', True),
                    'target_utilization': lambda_config.get('target_utilization', 0.7),
                    'min_capacity': lambda_config.get('min_capacity', 0),
                    'max_capacity': lambda_config.get('max_capacity', 1000)
                },
                'cost_optimization': {
                    'memory_optimization': lambda_config.get('optimize_memory', True),
                    'timeout_optimization': lambda_config.get('optimize_timeout', True),
                    'dead_letter_queue': lambda_config.get('enable_dlq', True)
                }
            }
            serverless_scaling_config['lambda_functions'].append(lambda_scaling)
        
        # Aurora Serverless configuration
        for aurora_config in serverless_config.get('aurora_serverless', []):
            aurora_scaling = {
                'cluster_identifier': aurora_config['cluster_identifier'],
                'engine': aurora_config.get('engine', 'aurora-mysql'),
                'scaling_configuration': {
                    'min_capacity': aurora_config.get('min_capacity', 1),
                    'max_capacity': aurora_config.get('max_capacity', 16),
                    'auto_pause': aurora_config.get('auto_pause', True),
                    'seconds_until_auto_pause': aurora_config.get('auto_pause_delay', 300),
                    'timeout_action': aurora_config.get('timeout_action', 'ForceApplyCapacityChange')
                }
            }
            serverless_scaling_config['aurora_serverless'].append(aurora_scaling)
        
        return serverless_scaling_config
    
    def implement_container_scaling(self, container_config: Dict) -> Dict:
        """Implement container auto-scaling configuration"""
        
        container_scaling_config = {
            'ecs_services': [],
            'eks_deployments': []
        }
        
        # ECS Service scaling
        for ecs_config in container_config.get('ecs_services', []):
            ecs_scaling = {
                'service_name': ecs_config['service_name'],
                'cluster_name': ecs_config['cluster_name'],
                'scalable_target': {
                    'min_capacity': ecs_config.get('min_capacity', 1),
                    'max_capacity': ecs_config.get('max_capacity', 10),
                    'resource_id': f"service/{ecs_config['cluster_name']}/{ecs_config['service_name']}",
                    'scalable_dimension': 'ecs:service:DesiredCount',
                    'service_namespace': 'ecs'
                },
                'scaling_policies': [
                    {
                        'policy_name': f"{ecs_config['service_name']}-cpu-scaling",
                        'policy_type': 'TargetTrackingScaling',
                        'target_tracking_configuration': {
                            'target_value': ecs_config.get('target_cpu_utilization', 70.0),
                            'predefined_metric_specification': {
                                'predefined_metric_type': 'ECSServiceAverageCPUUtilization'
                            },
                            'scale_out_cooldown': ecs_config.get('scale_out_cooldown', 300),
                            'scale_in_cooldown': ecs_config.get('scale_in_cooldown', 300)
                        }
                    },
                    {
                        'policy_name': f"{ecs_config['service_name']}-memory-scaling",
                        'policy_type': 'TargetTrackingScaling',
                        'target_tracking_configuration': {
                            'target_value': ecs_config.get('target_memory_utilization', 80.0),
                            'predefined_metric_specification': {
                                'predefined_metric_type': 'ECSServiceAverageMemoryUtilization'
                            }
                        }
                    }
                ]
            }
            container_scaling_config['ecs_services'].append(ecs_scaling)
        
        return container_scaling_config
    
    def implement_database_scaling(self, database_config: Dict) -> Dict:
        """Implement database auto-scaling configuration"""
        
        database_scaling_config = {
            'dynamodb_tables': [],
            'rds_instances': [],
            'aurora_clusters': []
        }
        
        # DynamoDB auto-scaling
        for dynamodb_config in database_config.get('dynamodb_tables', []):
            dynamodb_scaling = {
                'table_name': dynamodb_config['table_name'],
                'read_capacity_scaling': {
                    'min_capacity': dynamodb_config.get('min_read_capacity', 5),
                    'max_capacity': dynamodb_config.get('max_read_capacity', 4000),
                    'target_utilization': dynamodb_config.get('read_target_utilization', 70.0),
                    'scale_in_cooldown': dynamodb_config.get('read_scale_in_cooldown', 60),
                    'scale_out_cooldown': dynamodb_config.get('read_scale_out_cooldown', 60)
                },
                'write_capacity_scaling': {
                    'min_capacity': dynamodb_config.get('min_write_capacity', 5),
                    'max_capacity': dynamodb_config.get('max_write_capacity', 4000),
                    'target_utilization': dynamodb_config.get('write_target_utilization', 70.0),
                    'scale_in_cooldown': dynamodb_config.get('write_scale_in_cooldown', 60),
                    'scale_out_cooldown': dynamodb_config.get('write_scale_out_cooldown', 60)
                },
                'global_secondary_indexes': []
            }
            
            # Configure GSI scaling
            for gsi_config in dynamodb_config.get('global_secondary_indexes', []):
                gsi_scaling = {
                    'index_name': gsi_config['index_name'],
                    'read_capacity_scaling': {
                        'min_capacity': gsi_config.get('min_read_capacity', 5),
                        'max_capacity': gsi_config.get('max_read_capacity', 4000),
                        'target_utilization': gsi_config.get('read_target_utilization', 70.0)
                    },
                    'write_capacity_scaling': {
                        'min_capacity': gsi_config.get('min_write_capacity', 5),
                        'max_capacity': gsi_config.get('max_write_capacity', 4000),
                        'target_utilization': gsi_config.get('write_target_utilization', 70.0)
                    }
                }
                dynamodb_scaling['global_secondary_indexes'].append(gsi_scaling)
            
            database_scaling_config['dynamodb_tables'].append(dynamodb_scaling)
        
        return database_scaling_config
    
    def implement_predictive_scaling(self, predictive_config: Dict) -> Dict:
        """Implement predictive scaling using machine learning"""
        
        predictive_scaling_config = {
            'prediction_model': {
                'algorithm': predictive_config.get('algorithm', 'linear_regression'),
                'training_period_days': predictive_config.get('training_period', 14),
                'prediction_horizon_hours': predictive_config.get('prediction_horizon', 24),
                'confidence_threshold': predictive_config.get('confidence_threshold', 0.8)
            },
            'scaling_actions': {
                'pre_scale_buffer_minutes': predictive_config.get('pre_scale_buffer', 15),
                'scale_up_confidence_threshold': predictive_config.get('scale_up_confidence', 0.7),
                'scale_down_confidence_threshold': predictive_config.get('scale_down_confidence', 0.9),
                'max_predictive_scaling_percentage': predictive_config.get('max_predictive_scaling', 50)
            },
            'fallback_strategy': {
                'enable_reactive_fallback': predictive_config.get('enable_fallback', True),
                'fallback_threshold_minutes': predictive_config.get('fallback_threshold', 5),
                'fallback_scaling_policy': predictive_config.get('fallback_policy', 'reactive')
            }
        }
        
        return predictive_scaling_config
    
    def create_scaling_orchestrator(self, orchestration_config: Dict) -> Dict:
        """Create orchestrated scaling across multiple services"""
        
        orchestrator_config = {
            'orchestration_name': orchestration_config['name'],
            'scaling_groups': [],
            'dependencies': [],
            'coordination_strategy': orchestration_config.get('strategy', 'sequential'),
            'health_checks': [],
            'rollback_strategy': {}
        }
        
        # Define scaling groups
        for group_config in orchestration_config.get('scaling_groups', []):
            scaling_group = {
                'group_name': group_config['name'],
                'resources': group_config['resources'],
                'scaling_order': group_config.get('order', 1),
                'parallel_scaling': group_config.get('parallel', False),
                'health_check_required': group_config.get('health_check', True),
                'rollback_on_failure': group_config.get('rollback', True)
            }
            orchestrator_config['scaling_groups'].append(scaling_group)
        
        # Define dependencies
        for dependency in orchestration_config.get('dependencies', []):
            dep_config = {
                'source_group': dependency['source'],
                'target_group': dependency['target'],
                'dependency_type': dependency.get('type', 'scale_before'),
                'wait_condition': dependency.get('wait_condition', 'healthy'),
                'timeout_minutes': dependency.get('timeout', 10)
            }
            orchestrator_config['dependencies'].append(dep_config)
        
        return orchestrator_config
    
    def monitor_scaling_performance(self, monitoring_config: Dict) -> Dict:
        """Monitor and analyze scaling performance"""
        
        # Note: only calculate_scaling_efficiency is shown below; the other
        # analyzers would be implemented against your own metrics store.
        performance_metrics = {
            'scaling_efficiency': self.calculate_scaling_efficiency(),
            'cost_optimization': self.calculate_cost_optimization_metrics(),
            'performance_impact': self.analyze_performance_impact(),
            'scaling_accuracy': self.calculate_scaling_accuracy(),
            'resource_utilization': self.analyze_resource_utilization()
        }
        
        return performance_metrics
    
    def calculate_scaling_efficiency(self) -> Dict:
        """Calculate scaling efficiency metrics"""
        
        # This would analyze actual scaling events and their effectiveness
        return {
            'average_scale_up_time': 180,  # seconds
            'average_scale_down_time': 300,  # seconds
            'scaling_accuracy_percentage': 85,
            'over_scaling_events': 12,
            'under_scaling_events': 8,
            'optimal_scaling_events': 180
        }
    
    def create_dynamic_scaling_dashboard(self) -> Dict:
        """Create comprehensive dynamic scaling dashboard"""
        
        dashboard_config = {
            'dashboard_name': 'Dynamic Resource Scaling',
            'widgets': [
                {
                    'type': 'metric',
                    'title': 'Auto Scaling Group Activity',
                    'metrics': [
                        ['AWS/AutoScaling', 'GroupDesiredCapacity', 'AutoScalingGroupName', 'web-asg'],
                        ['AWS/AutoScaling', 'GroupInServiceInstances', 'AutoScalingGroupName', 'web-asg'],
                        ['AWS/AutoScaling', 'GroupTotalInstances', 'AutoScalingGroupName', 'web-asg']
                    ],
                    'period': 300
                },
                {
                    'type': 'metric',
                    'title': 'Lambda Concurrency and Scaling',
                    'metrics': [
                        ['AWS/Lambda', 'ConcurrentExecutions', 'FunctionName', 'api-function'],
                        ['AWS/Lambda', 'ProvisionedConcurrencyUtilization', 'FunctionName', 'api-function'],
                        ['AWS/Lambda', 'Duration', 'FunctionName', 'api-function']
                    ],
                    'period': 300
                },
                {
                    'type': 'metric',
                    'title': 'DynamoDB Auto Scaling',
                    'metrics': [
                        ['AWS/DynamoDB', 'ConsumedReadCapacityUnits', 'TableName', 'user-table'],
                        ['AWS/DynamoDB', 'ConsumedWriteCapacityUnits', 'TableName', 'user-table'],
                        ['AWS/DynamoDB', 'ReadThrottledEvents', 'TableName', 'user-table'],
                        ['AWS/DynamoDB', 'WriteThrottledEvents', 'TableName', 'user-table']
                    ],
                    'period': 300
                },
                {
                    'type': 'metric',
                    'title': 'ECS Service Scaling',
                    'metrics': [
                        ['AWS/ECS', 'ServiceRunningTaskCount', 'ServiceName', 'api-service', 'ClusterName', 'production'],
                        ['AWS/ECS', 'ServiceDesiredTaskCount', 'ServiceName', 'api-service', 'ClusterName', 'production'],
                        ['AWS/ECS', 'CPUUtilization', 'ServiceName', 'api-service', 'ClusterName', 'production'],
                        ['AWS/ECS', 'MemoryUtilization', 'ServiceName', 'api-service', 'ClusterName', 'production']
                    ],
                    'period': 300
                }
            ]
        }
        
        return dashboard_config

Dynamic Scaling Templates

Multi-Service Scaling Configuration

Multi_Service_Scaling_Configuration:
  configuration_name: "e-commerce-platform-scaling"
  objective: "Implement coordinated scaling across all application tiers"
  
  scaling_groups:
    web_tier:
      resources:
        - type: "EC2_AUTO_SCALING_GROUP"
          name: "web-servers-asg"
          min_capacity: 2
          max_capacity: 20
          target_cpu_utilization: 70
          
      scaling_policies:
        - type: "target_tracking"
          metric: "CPUUtilization"
          target_value: 70.0
          cooldown: 300
          
        - type: "predictive"
          enabled: true
          scheduling_buffer: 300
          
    application_tier:
      resources:
        - type: "ECS_SERVICE"
          name: "api-service"
          cluster: "production"
          min_capacity: 3
          max_capacity: 30
          target_cpu_utilization: 75
          target_memory_utilization: 80
          
    database_tier:
      resources:
        - type: "DYNAMODB_TABLE"
          name: "user-table"
          read_capacity:
            min: 5
            max: 4000
            target_utilization: 70
          write_capacity:
            min: 5
            max: 4000
            target_utilization: 70
            
        - type: "AURORA_SERVERLESS"
          name: "analytics-cluster"
          min_capacity: 1
          max_capacity: 16
          auto_pause: true
          auto_pause_delay: 300
          
  orchestration:
    strategy: "coordinated"
    scaling_order:
      - "database_tier"  # Scale database first
      - "application_tier"  # Then application layer
      - "web_tier"  # Finally web layer
      
    health_checks:
      - resource: "database_tier"
        check_type: "connection_test"
        timeout: 60
        
      - resource: "application_tier"
        check_type: "health_endpoint"
        endpoint: "/health"
        timeout: 30
        
  monitoring:
    scaling_metrics:
      - "scaling_events_per_hour"
      - "average_scaling_duration"
      - "scaling_accuracy_percentage"
      - "cost_optimization_achieved"
      
    alerts:
      - name: "FrequentScaling"
        condition: "scaling_events > 10 per hour"
        action: "investigate_scaling_policies"
        
      - name: "SlowScaling"
        condition: "average_scaling_duration > 600 seconds"
        action: "optimize_scaling_configuration"
        
  cost_optimization:
    estimated_monthly_savings: 2400.00
    savings_breakdown:
      over_provisioning_reduction: 1200.00
      improved_utilization: 800.00
      serverless_adoption: 400.00

Predictive Scaling Implementation

def create_predictive_scaling_implementation():
    """Create comprehensive predictive scaling implementation"""
    
    implementation = {
        'data_collection': {
            'metrics_sources': [
                'CloudWatch metrics',
                'Application logs',
                'Business metrics',
                'External data sources'
            ],
            'collection_frequency': '1 minute',
            'retention_period': '90 days',
            'data_preprocessing': {
                'normalization': True,
                'outlier_detection': True,
                'missing_data_handling': 'interpolation'
            }
        },
        
        'prediction_models': {
            'time_series_forecasting': {
                'algorithm': 'ARIMA',
                'seasonality_detection': True,
                'trend_analysis': True,
                'confidence_intervals': True
            },
            'machine_learning': {
                'algorithm': 'Random Forest',
                'feature_engineering': [
                    'time_of_day',
                    'day_of_week',
                    'month_of_year',
                    'business_events',
                    'weather_data'
                ],
                'model_retraining': 'weekly'
            },
            'ensemble_methods': {
                'combine_predictions': True,
                'weighting_strategy': 'accuracy_based',
                'fallback_model': 'simple_moving_average'
            }
        },
        
        'scaling_decisions': {
            'prediction_horizon': '2 hours',
            'confidence_threshold': 0.8,
            'scaling_buffer': '15 minutes',
            'maximum_predictive_scaling': '50%',
            'validation_checks': [
                'resource_availability',
                'cost_constraints',
                'business_rules'
            ]
        },
        
        'implementation_strategy': {
            'gradual_rollout': {
                'phase_1': 'Monitor predictions without scaling',
                'phase_2': 'Limited predictive scaling (25%)',
                'phase_3': 'Full predictive scaling with fallback',
                'phase_4': 'Optimized predictive scaling'
            },
            'risk_mitigation': {
                'reactive_fallback': True,
                'maximum_scale_out': '200%',
                'minimum_scale_in': '50%',
                'circuit_breaker': True
            }
        }
    }
    
    return implementation
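The `simple_moving_average` fallback model named above is easy to make concrete. A sketch that forecasts demand and caps predictive scale-out at 50%, mirroring the `maximum_predictive_scaling` limit in the configuration (all parameter values are illustrative):

```python
import math

def moving_average_forecast(history: list, window: int = 3) -> float:
    """Fallback predictor: simple moving average of recent demand samples."""
    if len(history) < window:
        raise ValueError("not enough history for the requested window")
    return sum(history[-window:]) / window

def predictive_capacity(history: list, current_capacity: int,
                        per_instance_capacity: float,
                        buffer_pct: float = 0.15,
                        max_scale_pct: float = 0.5) -> int:
    """Pre-scale toward forecast demand, never shrinking below current
    capacity and never growing more than max_scale_pct in one step."""
    forecast = moving_average_forecast(history)
    needed = math.ceil(forecast * (1 + buffer_pct) / per_instance_capacity)
    ceiling = math.ceil(current_capacity * (1 + max_scale_pct))
    return min(max(needed, current_capacity), ceiling)
```

Because predictive scaling only adds capacity here, reactive policies remain responsible for scale-in, matching the fallback strategy described above.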

Common Challenges and Solutions

Challenge: Scaling Latency and Timing

Solution: Implement predictive scaling to pre-provision resources. Use warm pools and pre-scaled capacity for faster scaling. Optimize AMI and container startup times.
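Warm pools are configured per Auto Scaling group. A sketch of the `put_warm_pool` parameters (the group name is hypothetical; instances kept in the `Stopped` state resume far faster than cold launches while incurring only EBS and elastic IP costs):

```python
def build_warm_pool_request(asg_name: str, min_size: int = 2,
                            pool_state: str = 'Stopped') -> dict:
    """Build put_warm_pool kwargs; pool_state may be 'Stopped', 'Running',
    or 'Hibernated' depending on the cost/latency trade-off."""
    return {
        'AutoScalingGroupName': asg_name,
        'MinSize': min_size,       # instances kept pre-initialized at all times
        'PoolState': pool_state,
    }

# Usage (requires credentials):
#   boto3.client('autoscaling').put_warm_pool(
#       **build_warm_pool_request('web-asg'))
```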

Challenge: Over-Scaling and Resource Waste

Solution: Implement intelligent cooldown periods and scaling policies. Use multi-dimensional scaling metrics. Monitor and tune scaling thresholds regularly.

Challenge: Complex Multi-Service Dependencies

Solution: Implement orchestrated scaling with dependency management. Use health checks and validation at each scaling step. Create rollback mechanisms for failed scaling operations.

Challenge: Cost vs. Performance Trade-offs

Solution: Implement cost-aware scaling policies with budget constraints. Use mixed instance types and pricing models. Monitor cost per unit of work metrics.

Challenge: Unpredictable Scaling Behavior

Solution: Use comprehensive monitoring and logging of scaling events. Implement gradual scaling with validation steps. Use machine learning for pattern recognition and prediction.