14. Chapter 14: Performance Optimization and Caching#
14.1. Learning Objectives#
By the end of this chapter, you will understand:
Performance optimization strategies for Web GIS applications
Implementing effective caching mechanisms for geospatial data
Frontend optimization techniques for map rendering and interactions
Backend optimization for spatial queries and data processing
CDN configuration and edge computing for global performance
Performance monitoring and bottleneck identification
14.2. Performance Challenges in Web GIS#
Web GIS applications face unique performance challenges compared to traditional web applications. Geospatial data is inherently large and complex, map rendering requires intensive graphics processing, and users expect real-time interactions with massive datasets. Understanding these challenges is essential for building responsive and scalable Web GIS applications.
14.2.1. Understanding Geospatial Performance Bottlenecks#
Large Dataset Handling: Geospatial datasets often contain millions of features with complex geometries. A single shapefile can contain detailed administrative boundaries with thousands of vertices per polygon, creating significant challenges for data transfer, storage, and rendering. Traditional web optimization techniques must be adapted to handle the unique characteristics of spatial data.
Rendering Performance: Map rendering involves complex operations including coordinate transformations, geometric simplification, and graphics rendering. Each zoom level may require different levels of detail, and smooth pan and zoom interactions demand consistent frame rates. Modern mapping libraries use techniques like tile caching, vector tiles, and WebGL acceleration to address these challenges.
Real-time Data Processing: Many Web GIS applications require real-time or near-real-time data processing for applications like vehicle tracking, environmental monitoring, or emergency response. This demands efficient data pipelines, optimized database queries, and effective caching strategies to maintain responsiveness.
Network Latency and Bandwidth: Geospatial applications often serve global audiences with varying network conditions. Large tile downloads, complex vector datasets, and frequent API calls can create poor user experiences for users with limited bandwidth or high latency connections.
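To make the bandwidth cost concrete, here is a small illustrative sketch (with assumed, hypothetical numbers) of estimating the tile payload a single viewport requires:

```typescript
// Rough estimate of the tile payload for one viewport. The tile size and
// average tile weight are assumptions for illustration, not measurements.
function estimateTilePayloadKb(
  viewportWidthPx: number,
  viewportHeightPx: number,
  tileSizePx: number,
  avgTileKb: number
): number {
  // A viewport can straddle one extra tile column/row in each direction
  const cols = Math.ceil(viewportWidthPx / tileSizePx) + 1;
  const rows = Math.ceil(viewportHeightPx / tileSizePx) + 1;
  return cols * rows * avgTileKb;
}
```

At 1024×768 with 256px tiles averaging 30 KB, one view already costs roughly 600 KB, and every pan or zoom triggers more downloads, which is why tile caching and compression matter so much on slow links.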
Memory Management: Client-side applications must manage memory efficiently when handling large datasets, multiple map layers, and complex visualizations. Memory leaks can quickly degrade performance, particularly in applications that allow users to load and interact with multiple datasets simultaneously.
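One common defense is bounding client-side caches with least-recently-used eviction. The following is a minimal, hypothetical sketch (the chapter's server-side code later uses the `lru-cache` package for this; production code would also track real byte sizes rather than entry counts):

```typescript
// Minimal LRU cache sketch: when the entry limit is reached, the least
// recently used entry is evicted so memory usage stays bounded.
class LayerCache<T> {
  private entries = new Map<string, T>();
  constructor(private maxEntries: number) {}

  get(key: string): T | undefined {
    const value = this.entries.get(key);
    if (value !== undefined) {
      // Re-insert to mark as most recently used (Map preserves insertion order)
      this.entries.delete(key);
      this.entries.set(key, value);
    }
    return value;
  }

  set(key: string, value: T): void {
    if (this.entries.has(key)) this.entries.delete(key);
    this.entries.set(key, value);
    if (this.entries.size > this.maxEntries) {
      // The oldest entry is the first key in insertion order
      const oldest = this.entries.keys().next().value as string;
      this.entries.delete(oldest);
    }
  }

  has(key: string): boolean {
    return this.entries.has(key);
  }
}
```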
14.2.2. Performance Optimization Strategy#
Layered Optimization Approach: Effective Web GIS performance optimization requires a comprehensive strategy addressing multiple layers of the application stack. This includes frontend optimizations for rendering and user interactions, backend optimizations for data processing and query performance, network optimizations for data transfer, and infrastructure optimizations for scalability and reliability.
Data-Driven Optimization: Performance optimization should be guided by real-world usage patterns and performance metrics. Understanding how users interact with your application, which features are accessed most frequently, and where performance bottlenecks occur enables targeted optimization efforts that provide the greatest impact.
Progressive Enhancement: Applications should provide a baseline experience that works across all devices and network conditions, with enhanced features for users with better hardware and connectivity. This approach ensures accessibility while taking advantage of modern capabilities where available.
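The idea can be sketched as capability-based configuration. The capability fields and thresholds below are hypothetical illustrations (in a browser they might be populated from feature detection and, where available, `navigator.deviceMemory` and `navigator.connection`):

```typescript
// Hypothetical sketch: choose rendering settings from detected capabilities,
// falling back to a baseline that works everywhere.
interface ClientCapabilities {
  webgl: boolean;
  deviceMemoryGb: number;
  effectiveType: '2g' | '3g' | '4g';
}

interface RenderSettings {
  renderer: 'webgl' | 'canvas';
  maxFeatures: number;
  useVectorTiles: boolean;
}

function selectRenderSettings(caps: ClientCapabilities): RenderSettings {
  // Baseline experience: modest feature budget, raster-friendly rendering
  if (!caps.webgl) {
    return { renderer: 'canvas', maxFeatures: 1000, useVectorTiles: false };
  }
  return {
    renderer: 'webgl',
    // Scale the feature budget with available memory (thresholds are illustrative)
    maxFeatures: caps.deviceMemoryGb >= 4 ? 50000 : 10000,
    // Vector tiles pay off only on reasonably fast connections
    useVectorTiles: caps.effectiveType === '4g'
  };
}
```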
14.3. Frontend Performance Optimization#
14.3.1. Map Rendering Optimization#
// client/src/services/mapOptimizationService.ts
interface MapPerformanceConfig {
maxFeatures: number;
simplificationTolerance: number;
clusterThreshold: number;
tileSize: number;
maxZoom: number;
vectorTileUrl: string;
rasterFallbackUrl: string;
}
interface PerformanceMetrics {
renderTime: number;
featureCount: number;
memoryUsage: number;
frameRate: number;
loadTime: number;
timestamp?: number; // set when a sample is recorded
}
export class MapOptimizationService {
private config: MapPerformanceConfig;
private metrics: PerformanceMetrics[] = [];
private renderingQueue: Array<() => void> = [];
private isProcessing = false;
constructor(config: MapPerformanceConfig) {
this.config = config;
this.startPerformanceMonitoring();
}
// Implement level-of-detail rendering
optimizeLayerForZoom(layer: any, zoom: number): any {
const startTime = performance.now();
// Determine appropriate level of detail based on zoom
const detailLevel = this.calculateDetailLevel(zoom);
// Apply geometric simplification
const simplifiedLayer = this.simplifyGeometry(layer, detailLevel);
// Apply feature filtering based on importance
const filteredLayer = this.filterFeaturesByImportance(simplifiedLayer, detailLevel);
// Apply clustering for point data at low zoom levels
const optimizedLayer = zoom < 10 && layer.type === 'point'
? this.clusterFeatures(filteredLayer)
: filteredLayer;
const renderTime = performance.now() - startTime;
this.recordMetrics({
renderTime,
featureCount: optimizedLayer.features?.length || 0,
memoryUsage: this.estimateMemoryUsage(optimizedLayer),
frameRate: 0, // Will be updated by animation frame monitoring
loadTime: 0
});
return optimizedLayer;
}
// Implement progressive loading
async loadDataProgressively(
bounds: [number, number, number, number],
zoom: number,
onProgress: (loaded: number, total: number) => void
): Promise<any[]> {
const tiles = this.calculateRequiredTiles(bounds, zoom);
const results: any[] = [];
// Sort tiles by importance (center tiles first)
const sortedTiles = this.prioritizeTiles(tiles, bounds);
// Load tiles in batches to avoid overwhelming the network
const batchSize = 4;
for (let i = 0; i < sortedTiles.length; i += batchSize) {
const batch = sortedTiles.slice(i, i + batchSize);
const batchPromises = batch.map(async (tile) => {
try {
const data = await this.loadTile(tile);
return this.optimizeLayerForZoom(data, zoom);
} catch (error) {
console.warn(`Failed to load tile ${tile.x},${tile.y}:`, error);
return null;
}
});
const batchResults = await Promise.allSettled(batchPromises);
batchResults.forEach((result) => {
if (result.status === 'fulfilled' && result.value) {
results.push(result.value);
}
});
onProgress(results.length, sortedTiles.length);
// Allow browser to process other tasks between batches
await this.yieldToMainThread();
}
return results;
}
// Implement efficient feature clustering
clusterFeatures(layer: any): any {
if (!layer.features || layer.features.length < this.config.clusterThreshold) {
return layer;
}
const clusters = new Map<string, any[]>();
const gridSize = this.calculateClusterGridSize();
// Group features by grid cells
layer.features.forEach((feature: any) => {
const point = this.getFeatureCenter(feature);
const gridKey = this.getGridKey(point, gridSize);
if (!clusters.has(gridKey)) {
clusters.set(gridKey, []);
}
clusters.get(gridKey)!.push(feature);
});
// Create cluster features
const clusteredFeatures = Array.from(clusters.entries()).map(([gridKey, features]) => {
if (features.length === 1) {
return features[0];
}
const center = this.calculateClusterCenter(features);
return {
type: 'Feature',
properties: {
cluster: true,
point_count: features.length,
clusterId: gridKey,
features: features.slice(0, 10) // Store sample of features
},
geometry: {
type: 'Point',
coordinates: center
}
};
});
return {
...layer,
features: clusteredFeatures
};
}
// Implement geometry simplification
simplifyGeometry(layer: any, tolerance: number): any {
if (!layer.features) return layer;
const simplifiedFeatures = layer.features.map((feature: any) => {
if (!feature.geometry || feature.geometry.type === 'Point') {
return feature;
}
try {
// Use simplified version of Douglas-Peucker algorithm
const simplifiedGeometry = this.douglasPeucker(feature.geometry, tolerance);
return {
...feature,
geometry: simplifiedGeometry
};
} catch (error) {
console.warn('Failed to simplify geometry:', error);
return feature;
}
});
return {
...layer,
features: simplifiedFeatures
};
}
// Implement request batching
private batchRequests<T>(
requests: Array<() => Promise<T>>,
batchSize: number = 5,
delayMs: number = 100
): Promise<T[]> {
return new Promise((resolve) => {
const results: T[] = [];
let currentBatch = 0;
const processBatch = async () => {
const start = currentBatch * batchSize;
const end = Math.min(start + batchSize, requests.length);
const batch = requests.slice(start, end);
try {
const batchResults = await Promise.allSettled(
batch.map(request => request())
);
batchResults.forEach((result) => {
if (result.status === 'fulfilled') {
results.push(result.value);
}
});
} catch (error) {
console.error('Batch processing error:', error);
}
currentBatch++;
if (currentBatch * batchSize < requests.length) {
setTimeout(processBatch, delayMs);
} else {
resolve(results);
}
};
processBatch();
});
}
// Memory management utilities
cleanupUnusedResources(): void {
// Remove cached data outside current viewport
this.cleanupViewportCache();
// Force garbage collection if exposed (e.g. Chromium started with --js-flags=--expose-gc)
if (typeof window !== 'undefined' && (window as any).gc) {
(window as any).gc();
}
// Clear old performance metrics
if (this.metrics.length > 1000) {
this.metrics = this.metrics.slice(-500);
}
}
// Performance monitoring
private startPerformanceMonitoring(): void {
// Monitor frame rate
let lastFrameTime = performance.now();
let frameCount = 0;
const measureFrameRate = () => {
const currentTime = performance.now();
frameCount++;
if (currentTime - lastFrameTime >= 1000) {
const fps = frameCount;
frameCount = 0;
lastFrameTime = currentTime;
// Update latest metrics with frame rate
if (this.metrics.length > 0) {
this.metrics[this.metrics.length - 1].frameRate = fps;
}
}
requestAnimationFrame(measureFrameRate);
};
requestAnimationFrame(measureFrameRate);
// Monitor memory usage
setInterval(() => {
if ('memory' in performance) {
const memory = (performance as any).memory;
console.debug('Memory usage:', {
used: Math.round(memory.usedJSHeapSize / 1024 / 1024) + 'MB',
total: Math.round(memory.totalJSHeapSize / 1024 / 1024) + 'MB',
limit: Math.round(memory.jsHeapSizeLimit / 1024 / 1024) + 'MB'
});
}
}, 10000);
}
// Utility methods
private calculateDetailLevel(zoom: number): number {
if (zoom > 15) return 1.0;
if (zoom > 10) return 0.1;
if (zoom > 5) return 0.01;
return 0.001;
}
private simplificationTolerance(zoom: number): number {
return Math.max(0.0001, this.config.simplificationTolerance / Math.pow(2, zoom));
}
private calculateRequiredTiles(
bounds: [number, number, number, number],
zoom: number
): Array<{x: number, y: number, z: number}> {
const [west, south, east, north] = bounds;
const tiles: Array<{x: number, y: number, z: number}> = [];
const minTileX = Math.floor((west + 180) / 360 * Math.pow(2, zoom));
const maxTileX = Math.floor((east + 180) / 360 * Math.pow(2, zoom));
const minTileY = Math.floor((1 - Math.log(Math.tan(north * Math.PI / 180) + 1 / Math.cos(north * Math.PI / 180)) / Math.PI) / 2 * Math.pow(2, zoom));
const maxTileY = Math.floor((1 - Math.log(Math.tan(south * Math.PI / 180) + 1 / Math.cos(south * Math.PI / 180)) / Math.PI) / 2 * Math.pow(2, zoom));
for (let x = minTileX; x <= maxTileX; x++) {
for (let y = minTileY; y <= maxTileY; y++) {
tiles.push({ x, y, z: zoom });
}
}
return tiles;
}
private prioritizeTiles(
tiles: Array<{x: number, y: number, z: number}>,
bounds: [number, number, number, number]
): Array<{x: number, y: number, z: number}> {
const [west, south, east, north] = bounds;
const centerX = (west + east) / 2;
const centerY = (south + north) / 2;
return tiles.sort((a, b) => {
const aDistance = this.tileDistanceFromCenter(a, centerX, centerY);
const bDistance = this.tileDistanceFromCenter(b, centerX, centerY);
return aDistance - bDistance;
});
}
private tileDistanceFromCenter(
tile: {x: number, y: number, z: number},
centerX: number,
centerY: number
): number {
const tileSize = 360 / Math.pow(2, tile.z);
const tileCenterX = tile.x * tileSize - 180 + tileSize / 2;
// Invert the Web Mercator tile-Y mapping to get the tile's center latitude
const n = Math.PI - 2 * Math.PI * (tile.y + 0.5) / Math.pow(2, tile.z);
const tileCenterY = (180 / Math.PI) * Math.atan(Math.sinh(n));
return Math.sqrt(
Math.pow(tileCenterX - centerX, 2) + Math.pow(tileCenterY - centerY, 2)
);
}
private async loadTile(tile: {x: number, y: number, z: number}): Promise<any> {
const url = this.config.vectorTileUrl
.replace('{z}', tile.z.toString())
.replace('{x}', tile.x.toString())
.replace('{y}', tile.y.toString());
const response = await fetch(url);
if (!response.ok) {
throw new Error(`Failed to load tile: ${response.statusText}`);
}
return response.json();
}
private async yieldToMainThread(): Promise<void> {
return new Promise(resolve => setTimeout(resolve, 0));
}
private getFeatureCenter(feature: any): [number, number] {
if (feature.geometry.type === 'Point') {
return feature.geometry.coordinates;
}
// Calculate centroid for other geometry types
const bbox = this.calculateBoundingBox(feature.geometry);
return [
(bbox[0] + bbox[2]) / 2,
(bbox[1] + bbox[3]) / 2
];
}
private calculateBoundingBox(geometry: any): [number, number, number, number] {
// Simplified bounding box calculation
let minX = Infinity, minY = Infinity, maxX = -Infinity, maxY = -Infinity;
const processCoordinate = (coord: number[]) => {
minX = Math.min(minX, coord[0]);
maxX = Math.max(maxX, coord[0]);
minY = Math.min(minY, coord[1]);
maxY = Math.max(maxY, coord[1]);
};
const processCoordinates = (coords: any) => {
if (Array.isArray(coords[0])) {
coords.forEach((coord: any) => processCoordinates(coord));
} else {
processCoordinate(coords);
}
};
processCoordinates(geometry.coordinates);
return [minX, minY, maxX, maxY];
}
private getGridKey(point: [number, number], gridSize: number): string {
const x = Math.floor(point[0] / gridSize);
const y = Math.floor(point[1] / gridSize);
return `${x},${y}`;
}
private calculateClusterGridSize(): number {
// Grid size should be based on current zoom level and screen size
return 0.01; // Simplified implementation
}
private calculateClusterCenter(features: any[]): [number, number] {
const sum = features.reduce(
(acc, feature) => {
const center = this.getFeatureCenter(feature);
return [acc[0] + center[0], acc[1] + center[1]];
},
[0, 0]
);
return [sum[0] / features.length, sum[1] / features.length];
}
private douglasPeucker(geometry: any, tolerance: number): any {
// Simplified Douglas-Peucker implementation
if (!geometry.coordinates || geometry.type === 'Point') {
return geometry;
}
const simplifyLineString = (coords: number[][]): number[][] => {
if (coords.length <= 2) return coords;
// Find the point with maximum distance from line segment
let maxDistance = 0;
let maxIndex = 0;
for (let i = 1; i < coords.length - 1; i++) {
const distance = this.pointLineDistance(
coords[i],
coords[0],
coords[coords.length - 1]
);
if (distance > maxDistance) {
maxDistance = distance;
maxIndex = i;
}
}
// If max distance is greater than tolerance, recursively simplify
if (maxDistance > tolerance) {
const left = simplifyLineString(coords.slice(0, maxIndex + 1));
const right = simplifyLineString(coords.slice(maxIndex));
return left.slice(0, -1).concat(right);
} else {
return [coords[0], coords[coords.length - 1]];
}
};
switch (geometry.type) {
case 'LineString':
return {
...geometry,
coordinates: simplifyLineString(geometry.coordinates)
};
case 'Polygon':
return {
...geometry,
coordinates: geometry.coordinates.map((ring: number[][]) =>
simplifyLineString(ring)
)
};
default:
return geometry;
}
}
private pointLineDistance(
point: number[],
lineStart: number[],
lineEnd: number[]
): number {
const A = point[0] - lineStart[0];
const B = point[1] - lineStart[1];
const C = lineEnd[0] - lineStart[0];
const D = lineEnd[1] - lineStart[1];
const dot = A * C + B * D;
const lenSq = C * C + D * D;
if (lenSq === 0) return Math.sqrt(A * A + B * B);
let param = dot / lenSq;
if (param < 0) {
return Math.sqrt(A * A + B * B);
} else if (param > 1) {
const E = point[0] - lineEnd[0];
const F = point[1] - lineEnd[1];
return Math.sqrt(E * E + F * F);
} else {
const x = lineStart[0] + param * C;
const y = lineStart[1] + param * D;
const dx = point[0] - x;
const dy = point[1] - y;
return Math.sqrt(dx * dx + dy * dy);
}
}
private filterFeaturesByImportance(layer: any, detailLevel: number): any {
if (!layer.features || detailLevel >= 1.0) return layer;
// Sort features by importance (area, population, etc.) without mutating the input
const sortedFeatures = [...layer.features].sort((a: any, b: any) => {
const aImportance = this.calculateFeatureImportance(a);
const bImportance = this.calculateFeatureImportance(b);
return bImportance - aImportance;
});
// Keep only the most important features based on detail level
const keepCount = Math.ceil(sortedFeatures.length * detailLevel);
return {
...layer,
features: sortedFeatures.slice(0, keepCount)
};
}
private calculateFeatureImportance(feature: any): number {
// Calculate importance based on properties and geometry
let importance = 1;
// Increase importance based on population, area, etc.
if (feature.properties?.population) {
importance += Math.log10(feature.properties.population);
}
if (feature.properties?.area) {
importance += Math.log10(feature.properties.area);
}
// Increase importance for administrative boundaries
if (feature.properties?.admin_level) {
importance += (10 - feature.properties.admin_level);
}
return importance;
}
private estimateMemoryUsage(layer: any): number {
// Rough estimation of memory usage in bytes
return JSON.stringify(layer).length * 2; // UTF-16 characters
}
private cleanupViewportCache(): void {
// Implementation would depend on specific caching strategy
console.debug('Cleaning up viewport cache');
}
private recordMetrics(metrics: PerformanceMetrics): void {
this.metrics.push({
...metrics,
timestamp: Date.now()
} as any);
// Keep only recent metrics
if (this.metrics.length > 100) {
this.metrics.shift();
}
}
getPerformanceReport(): any {
if (this.metrics.length === 0) return null;
const recent = this.metrics.slice(-10);
return {
averageRenderTime: recent.reduce((sum, m) => sum + m.renderTime, 0) / recent.length,
averageFrameRate: recent.reduce((sum, m) => sum + m.frameRate, 0) / recent.length,
averageFeatureCount: recent.reduce((sum, m) => sum + m.featureCount, 0) / recent.length,
totalMemoryUsage: recent[recent.length - 1]?.memoryUsage || 0,
metricsCount: this.metrics.length
};
}
}
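To make the effect of the tolerance parameter visible outside the class, here is a self-contained sketch of the same Douglas-Peucker pass used in `simplifyGeometry` above (the helper names are local to this example):

```typescript
// Perpendicular distance from point p to the segment a-b
function pointLineDistance(p: number[], a: number[], b: number[]): number {
  const dx = b[0] - a[0], dy = b[1] - a[1];
  const lenSq = dx * dx + dy * dy;
  if (lenSq === 0) return Math.hypot(p[0] - a[0], p[1] - a[1]);
  const t = Math.max(0, Math.min(1, ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / lenSq));
  return Math.hypot(p[0] - (a[0] + t * dx), p[1] - (a[1] + t * dy));
}

// Recursive Douglas-Peucker simplification of a coordinate list
function simplifyLine(coords: number[][], tolerance: number): number[][] {
  if (coords.length <= 2) return coords;
  let maxDistance = 0, maxIndex = 0;
  for (let i = 1; i < coords.length - 1; i++) {
    const d = pointLineDistance(coords[i], coords[0], coords[coords.length - 1]);
    if (d > maxDistance) { maxDistance = d; maxIndex = i; }
  }
  if (maxDistance > tolerance) {
    // Keep the farthest point and simplify both halves recursively
    const left = simplifyLine(coords.slice(0, maxIndex + 1), tolerance);
    const right = simplifyLine(coords.slice(maxIndex), tolerance);
    return left.slice(0, -1).concat(right);
  }
  // Every interior point is within tolerance of the chord: drop them all
  return [coords[0], coords[coords.length - 1]];
}
```

For a nearly straight line with one pronounced bump, a loose tolerance keeps only the endpoints and the bump, while a tight tolerance preserves every vertex: exactly the zoom-dependent trade-off `calculateDetailLevel` exploits.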
14.3.2. Component-Level Optimization#
// client/src/components/optimized/OptimizedMapComponent.tsx
import React, { useMemo, useCallback, memo, useRef, useEffect } from 'react';
import { Map, Source, Layer } from 'react-map-gl';
import { MapOptimizationService } from '../../services/mapOptimizationService';
interface OptimizedMapProps {
data: any[];
viewport: {
longitude: number;
latitude: number;
zoom: number;
};
onViewportChange: (viewport: any) => void;
config?: any;
}
// Memoize expensive calculations; reuse a single optimizer instance so its
// monitoring loop is not restarted on every data or zoom change
const useOptimizedMapData = (data: any[], zoom: number) => {
const optimizerRef = useRef<MapOptimizationService>();
if (!optimizerRef.current) {
optimizerRef.current = new MapOptimizationService({
maxFeatures: 10000,
simplificationTolerance: 0.001,
clusterThreshold: 100,
tileSize: 512,
maxZoom: 18,
vectorTileUrl: '',
rasterFallbackUrl: ''
});
}
return useMemo(
() => data.map(layer => optimizerRef.current!.optimizeLayerForZoom(layer, zoom)),
[data, zoom]
);
};
// Debounced viewport change handler: the timer resets on every call, so the
// callback fires only after `delay` ms of inactivity (trailing-edge debounce)
const useThrottledViewportChange = (
onViewportChange: (viewport: any) => void,
delay: number = 100
) => {
const timeoutRef = useRef<ReturnType<typeof setTimeout>>();
return useCallback((viewport: any) => {
if (timeoutRef.current) {
clearTimeout(timeoutRef.current);
}
timeoutRef.current = setTimeout(() => {
onViewportChange(viewport);
}, delay);
}, [onViewportChange, delay]);
};
// Virtualized layer rendering
const VirtualizedLayers: React.FC<{
layers: any[];
viewport: any;
maxLayers: number;
}> = memo(({ layers, viewport, maxLayers }) => {
// Only render layers visible in viewport and limit total number
const visibleLayers = useMemo(() => {
const sorted = layers
.filter(layer => isLayerVisible(layer, viewport))
.sort((a, b) => getLayerPriority(b, viewport) - getLayerPriority(a, viewport));
return sorted.slice(0, maxLayers);
}, [layers, viewport, maxLayers]);
return (
<>
{visibleLayers.map((layer, index) => (
<Source key={layer.id} id={layer.id} type="geojson" data={layer.data}>
<Layer {...layer.style} />
</Source>
))}
</>
);
});
// Main optimized map component
export const OptimizedMapComponent: React.FC<OptimizedMapProps> = memo(({
data,
viewport,
onViewportChange,
config = {}
}) => {
const mapRef = useRef<any>();
const optimizationService = useRef<MapOptimizationService>();
// Initialize optimization service
useEffect(() => {
optimizationService.current = new MapOptimizationService({
maxFeatures: 10000,
simplificationTolerance: 0.001,
clusterThreshold: 100,
tileSize: 512,
maxZoom: 18,
vectorTileUrl: config.vectorTileUrl || '',
rasterFallbackUrl: config.rasterFallbackUrl || ''
});
return () => {
optimizationService.current?.cleanupUnusedResources();
};
}, [config]);
// Optimize data based on zoom level
const optimizedData = useOptimizedMapData(data, viewport.zoom);
// Throttle viewport changes to reduce re-renders
const throttledViewportChange = useThrottledViewportChange(onViewportChange);
// Memoize map style to prevent unnecessary re-renders
const mapStyle = useMemo(() => ({
version: 8,
sources: {},
layers: []
}), []);
// Handle viewport changes with performance monitoring
const handleViewportChange = useCallback((newViewport: any) => {
const startTime = performance.now();
throttledViewportChange(newViewport);
// Log performance metrics
const duration = performance.now() - startTime;
console.debug('Viewport change duration:', duration);
}, [throttledViewportChange]);
// Cleanup on unmount
useEffect(() => {
return () => {
optimizationService.current?.cleanupUnusedResources();
};
}, []);
return (
<div style={{ width: '100%', height: '100%', position: 'relative' }}>
<Map
ref={mapRef}
{...viewport}
onMove={evt => handleViewportChange(evt.viewState)}
mapStyle={mapStyle}
style={{ width: '100%', height: '100%' }}
maxZoom={18}
minZoom={1}
// Performance optimizations
preserveDrawingBuffer={false}
antialias={false}
powerPreference="high-performance"
>
<VirtualizedLayers
layers={optimizedData}
viewport={viewport}
maxLayers={config.maxLayers || 20}
/>
</Map>
{/* Performance indicator */}
<PerformanceIndicator service={optimizationService.current} />
</div>
);
});
// Performance monitoring component
const PerformanceIndicator: React.FC<{
service?: MapOptimizationService;
}> = ({ service }) => {
const [report, setReport] = React.useState<any>(null);
useEffect(() => {
if (!service) return;
const interval = setInterval(() => {
const performanceReport = service.getPerformanceReport();
setReport(performanceReport);
}, 1000);
return () => clearInterval(interval);
}, [service]);
if (!report || process.env.NODE_ENV === 'production') {
return null;
}
return (
<div style={{
position: 'absolute',
top: 10,
right: 10,
background: 'rgba(0,0,0,0.7)',
color: 'white',
padding: '8px',
borderRadius: '4px',
fontSize: '12px',
fontFamily: 'monospace'
}}>
<div>FPS: {Math.round(report.averageFrameRate)}</div>
<div>Render: {Math.round(report.averageRenderTime)}ms</div>
<div>Features: {report.averageFeatureCount}</div>
<div>Memory: {Math.round(report.totalMemoryUsage / 1024)}KB</div>
</div>
);
};
// Utility functions
function isLayerVisible(layer: any, viewport: any): boolean {
if (!layer.bounds) return true;
const [west, south, east, north] = layer.bounds;
const padding = 0.1; // 10% padding for smooth transitions
return !(
viewport.longitude + padding < west ||
viewport.longitude - padding > east ||
viewport.latitude + padding < south ||
viewport.latitude - padding > north
);
}
function getLayerPriority(layer: any, viewport: any): number {
let priority = layer.priority || 0;
// Increase priority for layers closer to viewport center
if (layer.bounds) {
const [west, south, east, north] = layer.bounds;
const layerCenterX = (west + east) / 2;
const layerCenterY = (south + north) / 2;
const distance = Math.sqrt(
Math.pow(layerCenterX - viewport.longitude, 2) +
Math.pow(layerCenterY - viewport.latitude, 2)
);
priority += Math.max(0, 10 - distance);
}
return priority;
}
OptimizedMapComponent.displayName = 'OptimizedMapComponent';
14.4. Backend Performance Optimization#
14.4.1. Database Query Optimization#
-- Spatial indexing and query optimization
-- Create spatial indexes for geometry columns
CREATE INDEX idx_features_geom ON features USING GIST(geom);
CREATE INDEX idx_features_bounds ON features USING GIST(ST_Envelope(geom));
-- Create composite indexes for common query patterns
CREATE INDEX idx_features_type_geom ON features (feature_type) INCLUDE (geom);
CREATE INDEX idx_features_updated_geom ON features (updated_at DESC) INCLUDE (geom);
-- Partitioning for large datasets
CREATE TABLE features_partitioned (
id SERIAL PRIMARY KEY,
feature_type VARCHAR(50) NOT NULL,
geom GEOMETRY(GEOMETRY, 4326),
properties JSONB,
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
) PARTITION BY HASH (id);
-- Create partitions
CREATE TABLE features_part_0 PARTITION OF features_partitioned
FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE features_part_1 PARTITION OF features_partitioned
FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE features_part_2 PARTITION OF features_partitioned
FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE features_part_3 PARTITION OF features_partitioned
FOR VALUES WITH (MODULUS 4, REMAINDER 3);
-- Spatial indexes on partitions
CREATE INDEX idx_features_part_0_geom ON features_part_0 USING GIST(geom);
CREATE INDEX idx_features_part_1_geom ON features_part_1 USING GIST(geom);
CREATE INDEX idx_features_part_2_geom ON features_part_2 USING GIST(geom);
CREATE INDEX idx_features_part_3_geom ON features_part_3 USING GIST(geom);
-- Materialized views for common aggregations
CREATE MATERIALIZED VIEW feature_summary_by_type AS
SELECT
feature_type,
COUNT(*) as feature_count,
ST_Extent(geom) as bounds,
AVG(ST_Area(geom)) as avg_area,
MIN(created_at) as first_created,
MAX(updated_at) as last_updated
FROM features
GROUP BY feature_type;
-- A unique index is required for REFRESH MATERIALIZED VIEW CONCURRENTLY below
CREATE UNIQUE INDEX idx_feature_summary_type ON feature_summary_by_type (feature_type);
-- Refresh materialized view periodically
CREATE OR REPLACE FUNCTION refresh_feature_summary()
RETURNS void AS $$
BEGIN
REFRESH MATERIALIZED VIEW CONCURRENTLY feature_summary_by_type;
END;
$$ LANGUAGE plpgsql;
-- Optimized spatial queries with proper indexing
-- Bounding box query with spatial index
EXPLAIN (ANALYZE, BUFFERS)
SELECT id, feature_type, ST_AsGeoJSON(geom) as geometry, properties
FROM features
WHERE geom && ST_MakeEnvelope($1, $2, $3, $4, 4326)
AND ST_Intersects(geom, ST_MakeEnvelope($1, $2, $3, $4, 4326))
LIMIT 1000;
-- Optimized nearest neighbor query
EXPLAIN (ANALYZE, BUFFERS)
SELECT id, feature_type, ST_AsGeoJSON(geom) as geometry, properties,
ST_Distance(geom, ST_SetSRID(ST_Point($1, $2), 4326)) as distance
FROM features
-- ST_DWithin uses the spatial index itself; note the distance is expressed in
-- the geometry's units (degrees for SRID 4326; cast to geography for metres)
WHERE ST_DWithin(geom, ST_SetSRID(ST_Point($1, $2), 4326), 1000)
ORDER BY geom <-> ST_SetSRID(ST_Point($1, $2), 4326)
LIMIT 10;
-- Clustering query for map visualization
CREATE OR REPLACE FUNCTION cluster_points(
bbox GEOMETRY,
zoom_level INTEGER,
cluster_distance FLOAT DEFAULT 50
)
RETURNS TABLE(
cluster_id INTEGER,
point_count INTEGER,
geom GEOMETRY,
sample_properties JSONB
) AS $$
BEGIN
RETURN QUERY
WITH pts AS (
SELECT f.geom, f.properties
FROM features f
WHERE f.geom && bbox
),
k AS (
-- k must be the same for every row, and window functions cannot be nested
-- inside the ST_ClusterKMeans argument, so compute the count up front
SELECT CASE
WHEN COUNT(*) > 1000 THEN 100
WHEN COUNT(*) > 100 THEN 50
ELSE 10
END AS num_clusters
FROM pts
),
clustered AS (
SELECT
ST_ClusterKMeans(p.geom, (SELECT num_clusters FROM k)) OVER () AS cluster_id,
p.geom,
p.properties
FROM pts p
)
SELECT
c.cluster_id::INTEGER,
COUNT(*)::INTEGER as point_count,
ST_Centroid(ST_Collect(c.geom)) as geom,
(array_agg(c.properties))[1] as sample_properties
FROM clustered c
GROUP BY c.cluster_id;
END;
$$ LANGUAGE plpgsql;
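The `cluster_distance` parameter declared above is expressed in layer units (degrees for EPSG:4326), but clustering requirements are usually stated in screen pixels. A sketch of deriving one from the other, assuming 256px Web Mercator tiles (the function name is hypothetical):

```typescript
// Convert a desired on-screen cluster radius (pixels) into degrees of
// longitude at a given zoom level, assuming a 256px Web Mercator tile scheme.
// Note: this is the longitudinal span; the latitudinal span varies with latitude.
function clusterDistanceDegrees(pixelRadius: number, zoom: number, tileSize = 256): number {
  // Degrees of longitude spanned by one pixel at this zoom level
  const degreesPerPixel = 360 / (tileSize * Math.pow(2, zoom));
  return pixelRadius * degreesPerPixel;
}
```

At zoom 0 a 50px radius spans roughly 70 degrees, while by zoom 8 the same radius is only about a third of a degree, which is why the distance must be recomputed on every zoom change before calling the SQL function.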
14.4.2. Caching Strategies#
// server/src/services/cacheService.ts
import Redis from 'ioredis';
import { LRUCache } from 'lru-cache';
interface CacheOptions {
ttl: number;
maxSize?: number;
compressionThreshold?: number;
}
interface SpatialCacheKey {
type: 'bbox' | 'tile' | 'feature' | 'query';
bounds?: [number, number, number, number];
zoom?: number;
x?: number;
y?: number;
z?: number;
featureId?: string;
queryHash?: string;
}
export class SpatialCacheService {
private redis: Redis;
private memoryCache: LRUCache<string, any>;
private compressionThreshold: number;
constructor(redisUrl: string, options: CacheOptions = { ttl: 3600 }) {
this.redis = new Redis(redisUrl);
this.compressionThreshold = options.compressionThreshold || 1024; // 1KB
// Memory cache for frequently accessed small items
this.memoryCache = new LRUCache({
max: options.maxSize || 1000,
ttl: options.ttl * 1000, // Convert to milliseconds
updateAgeOnGet: true,
allowStale: false
});
}
// Generate cache keys for spatial data
generateSpatialKey(keyData: SpatialCacheKey): string {
switch (keyData.type) {
case 'bbox':
const [west, south, east, north] = keyData.bounds!;
return `bbox:${west.toFixed(6)},${south.toFixed(6)},${east.toFixed(6)},${north.toFixed(6)}:${keyData.zoom}`;
case 'tile':
return `tile:${keyData.z}/${keyData.x}/${keyData.y}`;
case 'feature':
return `feature:${keyData.featureId}`;
case 'query':
return `query:${keyData.queryHash}`;
default:
throw new Error(`Unknown cache key type: ${keyData.type}`);
}
}
// Multi-level caching: memory -> Redis -> database
async get<T>(key: string): Promise<T | null> {
try {
// Check memory cache first
const memoryResult = this.memoryCache.get(key);
if (memoryResult !== undefined) {
return memoryResult as T;
}
// Check Redis cache
const redisResult = await this.redis.get(key);
if (redisResult) {
const data = await this.decompress(redisResult);
const parsed = JSON.parse(data);
// Store in memory cache for faster future access
this.memoryCache.set(key, parsed);
return parsed as T;
}
return null;
} catch (error) {
console.error('Cache get error:', error);
return null;
}
}
async set<T>(key: string, value: T, ttl?: number): Promise<void> {
try {
const serialized = JSON.stringify(value);
const compressed = await this.compress(serialized);
// Store in both memory and Redis
this.memoryCache.set(key, value);
if (ttl) {
await this.redis.setex(key, ttl, compressed);
} else {
await this.redis.set(key, compressed);
}
} catch (error) {
console.error('Cache set error:', error);
}
}
// Spatial-aware cache invalidation
async invalidateSpatialRegion(bounds: [number, number, number, number]): Promise<void> {
try {
const [west, south, east, north] = bounds;
// Find all cache keys that intersect with this region
// (KEYS blocks Redis while it scans the keyspace; prefer SCAN / scanStream in production)
const pattern = 'bbox:*';
const keys = await this.redis.keys(pattern);
const keysToInvalidate: string[] = [];
for (const key of keys) {
const keyBounds = this.extractBoundsFromKey(key);
if (keyBounds && this.boundsIntersect(bounds, keyBounds)) {
keysToInvalidate.push(key);
}
}
// Batch delete
if (keysToInvalidate.length > 0) {
await this.redis.del(...keysToInvalidate);
// Remove from memory cache
keysToInvalidate.forEach(key => {
this.memoryCache.delete(key);
});
}
} catch (error) {
console.error('Spatial cache invalidation error:', error);
}
}
// Tile-based caching for map data
async getTile(x: number, y: number, z: number): Promise<Buffer | null> {
const key = this.generateSpatialKey({ type: 'tile', x, y, z });
try {
const cached = await this.redis.getBuffer(key);
return cached;
} catch (error) {
console.error('Tile cache get error:', error);
return null;
}
}
async setTile(x: number, y: number, z: number, data: Buffer, ttl: number = 3600): Promise<void> {
const key = this.generateSpatialKey({ type: 'tile', x, y, z });
try {
await this.redis.setex(key, ttl, data);
} catch (error) {
console.error('Tile cache set error:', error);
}
}
// Query result caching with automatic invalidation
async getCachedQuery<T>(
query: string,
params: any[],
ttl: number = 300
): Promise<T | null> {
const queryHash = this.hashQuery(query, params);
const key = this.generateSpatialKey({ type: 'query', queryHash });
return this.get<T>(key);
}
async setCachedQuery<T>(
query: string,
params: any[],
result: T,
ttl: number = 300
): Promise<void> {
const queryHash = this.hashQuery(query, params);
const key = this.generateSpatialKey({ type: 'query', queryHash });
await this.set(key, result, ttl);
}
// Batch operations for improved performance
async mget<T>(keys: string[]): Promise<(T | null)[]> {
try {
// Check memory cache first
const results: (T | null)[] = new Array(keys.length);
const redisKeys: string[] = [];
const redisIndexes: number[] = [];
for (let i = 0; i < keys.length; i++) {
const memoryResult = this.memoryCache.get(keys[i]);
if (memoryResult !== undefined) {
results[i] = memoryResult as T;
} else {
redisKeys.push(keys[i]);
redisIndexes.push(i);
}
}
// Fetch remaining from Redis
if (redisKeys.length > 0) {
const redisResults = await this.redis.mget(...redisKeys);
for (let i = 0; i < redisResults.length; i++) {
const redisResult = redisResults[i];
const originalIndex = redisIndexes[i];
if (redisResult) {
const data = await this.decompress(redisResult);
const parsed = JSON.parse(data);
results[originalIndex] = parsed as T;
// Store in memory cache
this.memoryCache.set(keys[originalIndex], parsed);
} else {
results[originalIndex] = null;
}
}
}
return results;
} catch (error) {
console.error('Cache mget error:', error);
return new Array(keys.length).fill(null);
}
}
async mset<T>(keyValuePairs: Array<{key: string, value: T}>, ttl?: number): Promise<void> {
try {
const pipeline = this.redis.pipeline();
for (const { key, value } of keyValuePairs) {
const serialized = JSON.stringify(value);
const compressed = await this.compress(serialized);
// Store in memory cache
this.memoryCache.set(key, value);
if (ttl) {
pipeline.setex(key, ttl, compressed);
} else {
pipeline.set(key, compressed);
}
}
await pipeline.exec();
} catch (error) {
console.error('Cache mset error:', error);
}
}
// Cache warming for frequently accessed data
async warmCache(warmingStrategy: 'tiles' | 'regions' | 'popular', config: any): Promise<void> {
try {
switch (warmingStrategy) {
case 'tiles':
await this.warmTileCache(config);
break;
case 'regions':
await this.warmRegionCache(config);
break;
case 'popular':
await this.warmPopularDataCache(config);
break;
}
} catch (error) {
console.error('Cache warming error:', error);
}
}
private async warmTileCache(config: {
bounds: [number, number, number, number];
minZoom: number;
maxZoom: number;
}): Promise<void> {
const { bounds, minZoom, maxZoom } = config;
for (let z = minZoom; z <= maxZoom; z++) {
const tiles = this.calculateTilesForBounds(bounds, z);
// Process tiles in batches to avoid overwhelming the system
const batchSize = 10;
for (let i = 0; i < tiles.length; i += batchSize) {
const batch = tiles.slice(i, i + batchSize);
await Promise.allSettled(
batch.map(async (tile) => {
const cached = await this.getTile(tile.x, tile.y, tile.z);
if (!cached) {
// Generate and cache tile data
const tileData = await this.generateTileData(tile.x, tile.y, tile.z);
if (tileData) {
await this.setTile(tile.x, tile.y, tile.z, tileData);
}
}
})
);
// Small delay to prevent overwhelming the system
await new Promise(resolve => setTimeout(resolve, 100));
}
}
}
private async warmRegionCache(config: { regions: string[] }): Promise<void> {
// Implementation for warming region-specific cache
console.log('Warming region cache for:', config.regions);
}
private async warmPopularDataCache(config: { limit: number }): Promise<void> {
// Implementation for warming cache with popular data
console.log('Warming popular data cache, limit:', config.limit);
}
// Utility methods
private async compress(data: string): Promise<string> {
if (data.length < this.compressionThreshold) {
return data;
}
// Use gzip compression for larger data
const zlib = require('zlib');
const compressed = zlib.gzipSync(Buffer.from(data));
return compressed.toString('base64');
}
private async decompress(data: string): Promise<string> {
try {
// Try to decompress first
const zlib = require('zlib');
const buffer = Buffer.from(data, 'base64');
const decompressed = zlib.gunzipSync(buffer);
return decompressed.toString();
} catch {
// If decompression fails, assume it's uncompressed
return data;
}
}
private hashQuery(query: string, params: any[]): string {
const crypto = require('crypto');
const combined = query + JSON.stringify(params);
return crypto.createHash('md5').update(combined).digest('hex');
}
private extractBoundsFromKey(key: string): [number, number, number, number] | null {
const match = key.match(/bbox:([-\d.]+),([-\d.]+),([-\d.]+),([-\d.]+)/);
if (match) {
return [
parseFloat(match[1]),
parseFloat(match[2]),
parseFloat(match[3]),
parseFloat(match[4])
];
}
return null;
}
private boundsIntersect(
bounds1: [number, number, number, number],
bounds2: [number, number, number, number]
): boolean {
const [west1, south1, east1, north1] = bounds1;
const [west2, south2, east2, north2] = bounds2;
return !(
west1 > east2 ||
east1 < west2 ||
south1 > north2 ||
north1 < south2
);
}
private calculateTilesForBounds(
bounds: [number, number, number, number],
zoom: number
): Array<{x: number, y: number, z: number}> {
const [west, south, east, north] = bounds;
const tiles: Array<{x: number, y: number, z: number}> = [];
const minTileX = Math.floor((west + 180) / 360 * Math.pow(2, zoom));
const maxTileX = Math.floor((east + 180) / 360 * Math.pow(2, zoom));
const minTileY = Math.floor((1 - Math.log(Math.tan(north * Math.PI / 180) + 1 / Math.cos(north * Math.PI / 180)) / Math.PI) / 2 * Math.pow(2, zoom));
const maxTileY = Math.floor((1 - Math.log(Math.tan(south * Math.PI / 180) + 1 / Math.cos(south * Math.PI / 180)) / Math.PI) / 2 * Math.pow(2, zoom));
for (let x = minTileX; x <= maxTileX; x++) {
for (let y = minTileY; y <= maxTileY; y++) {
tiles.push({ x, y, z: zoom });
}
}
return tiles;
}
private async generateTileData(x: number, y: number, z: number): Promise<Buffer | null> {
// This would integrate with your tile generation service
// For now, return null to indicate no tile data available
return null;
}
// Performance monitoring
getCacheStats(): any {
return {
memoryCache: {
size: this.memoryCache.size,
maxSize: this.memoryCache.max,
// lru-cache does not track hits and misses itself; report
// utilization here and derive hit rates from the monitoring service
utilization: this.memoryCache.size / this.memoryCache.max
},
redis: {
connected: this.redis.status === 'ready'
}
};
}
// Cleanup and shutdown
async shutdown(): Promise<void> {
this.memoryCache.clear();
await this.redis.quit();
}
}
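At its core, the class above implements a two-tier read-through: check process memory first, fall back to Redis, and promote remote hits into memory. The stripped-down sketch below makes that flow explicit; the `TwoTierCache` name is illustrative, a plain `Map` stands in for Redis, and TTLs and compression are omitted for brevity.

```typescript
// A plain Map stands in for Redis; TTLs and compression are omitted.
type RemoteStore = Map<string, string>;

class TwoTierCache {
  private memory = new Map<string, unknown>();
  constructor(private remote: RemoteStore) {}

  get<T>(key: string): T | null {
    // Fast path: in-process memory cache.
    if (this.memory.has(key)) return this.memory.get(key) as T;
    // Slow path: shared remote store; promote hits into memory.
    const raw = this.remote.get(key);
    if (raw === undefined) return null;
    const value = JSON.parse(raw) as T;
    this.memory.set(key, value);
    return value;
  }

  set<T>(key: string, value: T): void {
    this.memory.set(key, value);                  // uncompressed, per-process
    this.remote.set(key, JSON.stringify(value));  // serialized, shared
  }
}
```

The same asymmetry appears in the full implementation: the memory tier stores live objects for speed, while the shared tier stores serialized (and, above a size threshold, compressed) strings.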
14.5. CDN and Edge Optimization#
14.5.1. CDN Configuration#
// server/src/services/cdnService.ts
interface CDNConfig {
provider: 'cloudflare' | 'aws' | 'azure';
domains: string[];
tileEndpoints: string[];
cacheRules: CacheRule[];
compressionSettings: CompressionConfig;
geoReplication: GeoReplicationConfig;
}
interface CacheRule {
path: string;
ttl: number;
vary: string[];
browserTtl: number;
}
interface CompressionConfig {
enabled: boolean;
algorithms: ('gzip' | 'brotli' | 'deflate')[];
minSize: number;
mimeTypes: string[];
}
interface GeoReplicationConfig {
enabled: boolean;
regions: string[];
syncStrategy: 'immediate' | 'lazy' | 'scheduled';
}
export class CDNOptimizationService {
private config: CDNConfig;
constructor(config: CDNConfig) {
this.config = config;
}
// Generate optimized cache headers for different content types
generateCacheHeaders(contentType: string, path: string): Record<string, string> {
const rule = this.findMatchingCacheRule(path);
const headers: Record<string, string> = {};
// Set cache control based on content type
if (contentType.includes('tile') || path.includes('/tiles/')) {
// Map tiles - long cache with stale-while-revalidate
headers['Cache-Control'] = `public, max-age=${rule.ttl}, stale-while-revalidate=86400`;
headers['Vary'] = 'Accept-Encoding';
} else if (contentType === 'application/json') {
// API responses - shorter cache with revalidation
headers['Cache-Control'] = `public, max-age=${Math.min(rule.ttl, 300)}, must-revalidate`;
headers['Vary'] = 'Accept-Encoding, Authorization';
} else if (contentType.startsWith('image/')) {
// Static images - long cache
headers['Cache-Control'] = `public, max-age=${rule.ttl}, immutable`;
} else {
// Default caching
headers['Cache-Control'] = `public, max-age=${rule.ttl}`;
}
// Add ETags for efficient caching
headers['ETag'] = this.generateETag(contentType, path);
// Compression is negotiated per request (Accept-Encoding is a request
// header), so here we only ensure caches vary on it
if (this.config.compressionSettings.enabled && !headers['Vary']) {
headers['Vary'] = 'Accept-Encoding';
}
return headers;
}
// Configure CDN for optimal geospatial content delivery
async configureCDN(): Promise<void> {
switch (this.config.provider) {
case 'cloudflare':
await this.configureCloudflare();
break;
case 'aws':
await this.configureAWS();
break;
case 'azure':
await this.configureAzure();
break;
}
}
private async configureCloudflare(): Promise<void> {
// Cloudflare-specific configuration
const rules = [
{
pattern: '*/tiles/*',
cache_level: 'cache_everything',
edge_cache_ttl: 86400,
browser_cache_ttl: 86400,
compression: 'gzip',
mirage: true, // Optimize images
polish: 'lossy' // Image optimization
},
{
pattern: '*/api/data/*',
cache_level: 'cache_everything',
edge_cache_ttl: 300,
browser_cache_ttl: 60,
bypass_cache_on_cookie: false
}
];
// Apply page rules via Cloudflare API
for (const rule of rules) {
await this.applyCloudflarePageRule(rule);
}
// Configure Argo Smart Routing for improved performance
await this.enableArgoSmartRouting();
// Configure Worker for dynamic content optimization
await this.deployOptimizationWorker();
}
private async configureAWS(): Promise<void> {
// CloudFront distribution configuration
const distributionConfig = {
CallerReference: Date.now().toString(),
Origins: {
Quantity: this.config.tileEndpoints.length,
Items: this.config.tileEndpoints.map((endpoint, index) => ({
Id: `origin-${index}`,
DomainName: endpoint,
CustomOriginConfig: {
HTTPPort: 80,
HTTPSPort: 443,
OriginProtocolPolicy: 'https-only',
OriginSslProtocols: {
Quantity: 1,
Items: ['TLSv1.2']
}
}
}))
},
DefaultCacheBehavior: {
TargetOriginId: 'origin-0',
ViewerProtocolPolicy: 'redirect-to-https',
CachePolicyId: this.getOptimizedCachePolicyId(),
Compress: true,
ResponseHeadersPolicyId: this.getSecurityHeadersPolicyId()
},
CacheBehaviors: {
Quantity: 2,
Items: [
{
PathPattern: '/tiles/*',
TargetOriginId: 'origin-0',
CachePolicyId: this.getLongTermCachePolicyId(),
ViewerProtocolPolicy: 'https-only',
Compress: true
},
{
PathPattern: '/api/*',
TargetOriginId: 'origin-0',
CachePolicyId: this.getAPIResponseCachePolicyId(),
ViewerProtocolPolicy: 'https-only',
Compress: true
}
]
},
PriceClass: 'PriceClass_All',
Enabled: true,
HttpVersion: 'http2'
};
// Create or update distribution
await this.createOrUpdateCloudFrontDistribution(distributionConfig);
}
private async configureAzure(): Promise<void> {
// Azure CDN configuration
const cdnProfile = {
location: 'Global',
sku: {
name: 'Premium_Verizon' // For advanced caching features
},
tags: {
service: 'webgis',
environment: process.env.NODE_ENV
}
};
await this.createOrUpdateAzureCDNProfile(cdnProfile);
// Configure custom rules for geospatial content
const customRules = [
{
name: 'TileOptimization',
order: 1,
conditions: [
{
name: 'UrlPath',
parameters: {
operator: 'Contains',
value: '/tiles/'
}
}
],
actions: [
{
name: 'CacheExpiration',
parameters: {
cacheBehavior: 'Override',
cacheType: 'All',
cacheDuration: '1.00:00:00' // 1 day
}
},
{
name: 'ResponseHeader',
parameters: {
headerAction: 'Overwrite',
headerName: 'Cache-Control',
value: 'public, max-age=86400, immutable'
}
}
]
}
];
await this.applyAzureCustomRules(customRules);
}
// Edge computing for dynamic tile generation
generateEdgeWorkerScript(): string {
return `
addEventListener('fetch', event => {
event.respondWith(handleRequest(event.request));
});
async function handleRequest(request) {
const url = new URL(request.url);
// Handle tile requests with dynamic generation
if (url.pathname.startsWith('/tiles/')) {
return handleTileRequest(request, url);
}
// Handle API requests with caching
if (url.pathname.startsWith('/api/')) {
return handleAPIRequest(request, url);
}
// Pass through other requests
return fetch(request);
}
async function handleTileRequest(request, url) {
const cacheKey = new Request(url.toString(), request);
const cache = caches.default;
// Check cache first
let response = await cache.match(cacheKey);
if (!response) {
// Generate tile dynamically if not cached
response = await generateTile(url);
// Headers on a cloned Response are immutable, so build a fresh
// Response with copied headers before caching it. (The fetch event
// is not in scope here, so the put is awaited instead of waitUntil-ed.)
const headers = new Headers(response.headers);
headers.set('Cache-Control', 'public, max-age=86400');
headers.set('Expires', new Date(Date.now() + 86400000).toUTCString());
const cacheResponse = new Response(response.clone().body, {
status: response.status,
headers
});
await cache.put(cacheKey, cacheResponse);
}
return response;
}
async function generateTile(url) {
const pathParts = url.pathname.split('/');
const z = parseInt(pathParts[2]);
const x = parseInt(pathParts[3]);
const y = parseInt(pathParts[4].split('.')[0]);
// Simple vector tile generation
const bounds = tileToBounds(x, y, z);
const features = await fetchFeaturesInBounds(bounds);
const vectorTile = {
layers: {
data: {
version: 2,
name: 'data',
extent: 4096,
features: features.map(f => simplifyFeature(f, z))
}
}
};
return new Response(JSON.stringify(vectorTile), {
headers: {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*',
'Cache-Control': 'public, max-age=86400'
}
});
}
async function handleAPIRequest(request, url) {
// Add request optimization for API calls
const optimizedRequest = optimizeAPIRequest(request);
// Check cache based on request parameters
const cacheKey = generateAPICacheKey(url, request);
const cache = caches.default;
let response = await cache.match(cacheKey);
if (!response) {
response = await fetch(optimizedRequest);
if (response.ok && shouldCacheAPIResponse(url)) {
// Fetched responses have immutable headers; copy them into a new
// Response before overriding Cache-Control for the cached copy.
const headers = new Headers(response.headers);
headers.set('Cache-Control', 'public, max-age=300');
const cacheResponse = new Response(response.clone().body, {
status: response.status,
headers
});
await cache.put(cacheKey, cacheResponse);
}
}
return response;
}
function tileToBounds(x, y, z) {
const n = Math.PI - 2 * Math.PI * y / Math.pow(2, z);
return [
x / Math.pow(2, z) * 360 - 180,
180 / Math.PI * Math.atan(0.5 * (Math.exp(n) - Math.exp(-n))),
(x + 1) / Math.pow(2, z) * 360 - 180,
180 / Math.PI * Math.atan(0.5 * (Math.exp(n - 2 * Math.PI / Math.pow(2, z)) - Math.exp(-n + 2 * Math.PI / Math.pow(2, z))))
];
}
function simplifyFeature(feature, zoom) {
// Simplify geometry based on zoom level
const tolerance = Math.max(0.0001, 0.01 / Math.pow(2, zoom));
return {
...feature,
geometry: simplifyGeometry(feature.geometry, tolerance)
};
}
function optimizeAPIRequest(request) {
// Add compression headers
const headers = new Headers(request.headers);
headers.set('Accept-Encoding', 'gzip, br');
return new Request(request.url, {
method: request.method,
headers: headers,
body: request.body
});
}
function generateAPICacheKey(url, request) {
// The Cache API requires an absolute URL and only stores GET requests,
// so normalize to origin + path + query.
return new Request(url.origin + url.pathname + url.search, {
method: 'GET'
});
}
function shouldCacheAPIResponse(url) {
// Don't cache user-specific or real-time data
return !url.pathname.includes('/user/') &&
!url.pathname.includes('/realtime/') &&
!url.searchParams.has('nocache');
}
`;
}
// Performance monitoring for CDN
async monitorCDNPerformance(): Promise<any> {
const metrics = {
hitRatio: await this.getCacheHitRatio(),
averageResponseTime: await this.getAverageResponseTime(),
bandwidthUsage: await this.getBandwidthUsage(),
errorRate: await this.getErrorRate(),
topCachedContent: await this.getTopCachedContent()
};
return metrics;
}
// Utility methods
private findMatchingCacheRule(path: string): CacheRule {
for (const rule of this.config.cacheRules) {
if (path.match(new RegExp(rule.path))) {
return rule;
}
}
// Default rule
return {
path: '*',
ttl: 3600,
vary: ['Accept-Encoding'],
browserTtl: 300
};
}
private generateETag(contentType: string, path: string): string {
// An ETag must be stable while the content is unchanged; a real
// implementation would hash the response body or a content version,
// never a timestamp, which would defeat revalidation entirely.
const crypto = require('crypto');
return crypto
.createHash('md5')
.update(contentType + path)
.digest('hex');
}
private async applyCloudflarePageRule(rule: any): Promise<void> {
// Implementation for Cloudflare API calls
console.log('Applying Cloudflare page rule:', rule);
}
private async enableArgoSmartRouting(): Promise<void> {
// Enable Argo Smart Routing via API
console.log('Enabling Argo Smart Routing');
}
private async deployOptimizationWorker(): Promise<void> {
const workerScript = this.generateEdgeWorkerScript();
// Deploy worker via Cloudflare Workers API
console.log('Deploying optimization worker');
}
// Placeholder names; real deployments reference CloudFront cache
// policies by their IDs (e.g. the managed CachingOptimized policy).
private getOptimizedCachePolicyId(): string {
return 'managed-caching-optimized';
}
private getSecurityHeadersPolicyId(): string {
return 'managed-security-headers';
}
private getLongTermCachePolicyId(): string {
return 'managed-caching-optimized-for-uncompressed-objects';
}
private getAPIResponseCachePolicyId(): string {
return 'managed-caching-disabled';
}
private async createOrUpdateCloudFrontDistribution(config: any): Promise<void> {
// AWS CloudFront API implementation
console.log('Creating CloudFront distribution:', config);
}
private async createOrUpdateAzureCDNProfile(profile: any): Promise<void> {
// Azure CDN API implementation
console.log('Creating Azure CDN profile:', profile);
}
private async applyAzureCustomRules(rules: any[]): Promise<void> {
// Azure CDN custom rules implementation
console.log('Applying Azure custom rules:', rules);
}
private async getCacheHitRatio(): Promise<number> {
// Implementation depends on CDN provider
return 0.85; // Example value
}
private async getAverageResponseTime(): Promise<number> {
// Implementation depends on CDN provider
return 150; // Example value in ms
}
private async getBandwidthUsage(): Promise<number> {
// Implementation depends on CDN provider
return 1024; // Example value in MB
}
private async getErrorRate(): Promise<number> {
// Implementation depends on CDN provider
return 0.02; // Example value (2%)
}
private async getTopCachedContent(): Promise<string[]> {
// Implementation depends on CDN provider
return ['/tiles/14/8192/8192.json', '/api/features/popular'];
}
}
14.6. Performance Monitoring and Analytics#
14.6.1. Comprehensive Performance Monitoring#
// server/src/services/performanceMonitoringService.ts
interface PerformanceMetric {
timestamp: Date;
metricType: 'response_time' | 'query_duration' | 'cache_hit' | 'memory_usage' | 'cpu_usage';
value: number;
metadata?: Record<string, any>;
tags?: string[];
}
interface PerformanceBudget {
metric: string;
threshold: number;
timeWindow: number; // in seconds
severity: 'warning' | 'critical';
}
export class PerformanceMonitoringService {
private metrics: PerformanceMetric[] = [];
private budgets: PerformanceBudget[] = [];
private alerting: AlertingService;
constructor() {
this.alerting = new AlertingService();
this.setupDefaultBudgets();
this.startContinuousMonitoring();
}
// Record performance metrics
recordMetric(metric: PerformanceMetric): void {
this.metrics.push({
...metric,
timestamp: new Date()
});
// Keep only recent metrics to prevent memory leaks
if (this.metrics.length > 10000) {
this.metrics = this.metrics.slice(-5000);
}
// Check against performance budgets
this.checkPerformanceBudgets(metric);
}
// Express middleware for automatic request monitoring
requestMonitoringMiddleware() {
return (req: any, res: any, next: any) => {
const startTime = process.hrtime.bigint();
const startMemory = process.memoryUsage();
// Monitor request completion
res.on('finish', () => {
const endTime = process.hrtime.bigint();
const endMemory = process.memoryUsage();
const responseTime = Number(endTime - startTime) / 1_000_000; // Convert to milliseconds
const memoryDelta = endMemory.heapUsed - startMemory.heapUsed;
this.recordMetric({
timestamp: new Date(),
metricType: 'response_time',
value: responseTime,
metadata: {
method: req.method,
path: req.path,
statusCode: res.statusCode,
contentLength: res.get('Content-Length'),
userAgent: req.get('User-Agent'),
memoryDelta
},
tags: [
`method:${req.method}`,
`status:${res.statusCode}`,
`path:${req.path.split('/')[1]}`
]
});
// Record memory usage if significant change
if (Math.abs(memoryDelta) > 1024 * 1024) { // 1MB threshold
this.recordMetric({
timestamp: new Date(),
metricType: 'memory_usage',
value: endMemory.heapUsed,
metadata: {
delta: memoryDelta,
request: `${req.method} ${req.path}`
}
});
}
});
next();
};
}
// Database query monitoring
async monitorDatabaseQuery<T>(
query: string,
params: any[],
executor: () => Promise<T>
): Promise<T> {
const startTime = process.hrtime.bigint();
try {
const result = await executor();
const duration = Number(process.hrtime.bigint() - startTime) / 1_000_000;
this.recordMetric({
timestamp: new Date(),
metricType: 'query_duration',
value: duration,
metadata: {
query: this.sanitizeQuery(query),
paramCount: params.length,
resultSize: this.estimateResultSize(result)
},
tags: ['database:postgresql']
});
return result;
} catch (error) {
const duration = Number(process.hrtime.bigint() - startTime) / 1_000_000;
this.recordMetric({
timestamp: new Date(),
metricType: 'query_duration',
value: duration,
metadata: {
query: this.sanitizeQuery(query),
error: error instanceof Error ? error.message : 'Unknown error'
},
tags: ['database:postgresql', 'status:error']
});
throw error;
}
}
// Cache performance monitoring
monitorCacheOperation(
operation: 'hit' | 'miss' | 'set' | 'delete',
key: string,
duration?: number
): void {
this.recordMetric({
timestamp: new Date(),
metricType: 'cache_hit',
value: operation === 'hit' ? 1 : 0,
metadata: {
operation,
key: this.sanitizeCacheKey(key),
duration
},
tags: [`cache:${operation}`]
});
}
// Real-time performance dashboard data
getPerformanceDashboard(timeWindow: number = 3600): any {
const since = new Date(Date.now() - timeWindow * 1000);
const recentMetrics = this.metrics.filter(m => m.timestamp >= since);
return {
summary: this.calculateSummaryMetrics(recentMetrics),
responseTime: this.analyzeResponseTimes(recentMetrics),
databasePerformance: this.analyzeDatabasePerformance(recentMetrics),
cachePerformance: this.analyzeCachePerformance(recentMetrics),
memoryUsage: this.analyzeMemoryUsage(recentMetrics),
alerts: this.getActiveAlerts(),
trends: this.calculateTrends(recentMetrics)
};
}
// Performance budgets and alerting
private setupDefaultBudgets(): void {
this.budgets = [
{
metric: 'response_time',
threshold: 500, // 500ms
timeWindow: 300, // 5 minutes
severity: 'warning'
},
{
metric: 'response_time',
threshold: 1000, // 1 second
timeWindow: 300,
severity: 'critical'
},
{
metric: 'query_duration',
threshold: 1000, // 1 second
timeWindow: 300,
severity: 'warning'
},
{
metric: 'memory_usage',
threshold: 512 * 1024 * 1024, // 512MB
timeWindow: 600, // 10 minutes
severity: 'warning'
}
];
}
private checkPerformanceBudgets(metric: PerformanceMetric): void {
const relevantBudgets = this.budgets.filter(b =>
b.metric === metric.metricType
);
for (const budget of relevantBudgets) {
if (metric.value > budget.threshold) {
this.alerting.triggerAlert({
type: 'performance_budget_exceeded',
severity: budget.severity,
metric: budget.metric,
value: metric.value,
threshold: budget.threshold,
metadata: metric.metadata
});
}
}
}
// Continuous monitoring setup
private startContinuousMonitoring(): void {
// Monitor system resources every 30 seconds
setInterval(() => {
const memUsage = process.memoryUsage();
const cpuUsage = process.cpuUsage();
this.recordMetric({
timestamp: new Date(),
metricType: 'memory_usage',
value: memUsage.heapUsed,
metadata: {
heapTotal: memUsage.heapTotal,
external: memUsage.external,
arrayBuffers: memUsage.arrayBuffers
}
});
this.recordMetric({
timestamp: new Date(),
metricType: 'cpu_usage',
value: cpuUsage.user + cpuUsage.system,
metadata: {
user: cpuUsage.user,
system: cpuUsage.system
}
});
}, 30000);
// Generate performance reports every 5 minutes
setInterval(() => {
const report = this.generatePerformanceReport();
console.log('Performance Report:', report);
// Send to external monitoring service if configured
this.sendToExternalMonitoring(report);
}, 300000);
}
// Analysis methods
private calculateSummaryMetrics(metrics: PerformanceMetric[]): any {
const responseTimeMetrics = metrics.filter(m => m.metricType === 'response_time');
const queryMetrics = metrics.filter(m => m.metricType === 'query_duration');
// Only hit/miss lookups belong in the hit-rate ratio; set and delete
// operations are also recorded under 'cache_hit' and would dilute it.
const lookupMetrics = metrics.filter(m =>
m.metricType === 'cache_hit' &&
(m.metadata?.operation === 'hit' || m.metadata?.operation === 'miss')
);
return {
totalRequests: responseTimeMetrics.length,
averageResponseTime: this.calculateAverage(responseTimeMetrics.map(m => m.value)),
p95ResponseTime: this.calculatePercentile(responseTimeMetrics.map(m => m.value), 95),
averageQueryTime: this.calculateAverage(queryMetrics.map(m => m.value)),
cacheHitRate: lookupMetrics.length > 0
? lookupMetrics.filter(m => m.value === 1).length / lookupMetrics.length
: 0
};
}
private analyzeResponseTimes(metrics: PerformanceMetric[]): any {
const responseTimeMetrics = metrics.filter(m => m.metricType === 'response_time');
if (responseTimeMetrics.length === 0) return null;
const byPath = this.groupBy(responseTimeMetrics, m => m.metadata?.path || 'unknown');
const pathAnalysis = Object.entries(byPath).map(([path, pathMetrics]) => ({
path,
count: pathMetrics.length,
average: this.calculateAverage(pathMetrics.map(m => m.value)),
p95: this.calculatePercentile(pathMetrics.map(m => m.value), 95),
max: Math.max(...pathMetrics.map(m => m.value))
}));
return {
overall: {
count: responseTimeMetrics.length,
average: this.calculateAverage(responseTimeMetrics.map(m => m.value)),
p50: this.calculatePercentile(responseTimeMetrics.map(m => m.value), 50),
p95: this.calculatePercentile(responseTimeMetrics.map(m => m.value), 95),
p99: this.calculatePercentile(responseTimeMetrics.map(m => m.value), 99)
},
byPath: pathAnalysis.sort((a, b) => b.average - a.average).slice(0, 10)
};
}
private analyzeDatabasePerformance(metrics: PerformanceMetric[]): any {
const queryMetrics = metrics.filter(m => m.metricType === 'query_duration');
if (queryMetrics.length === 0) return null;
const slowQueries = queryMetrics.filter(m => m.value > 1000); // Slower than 1 second
const errorQueries = queryMetrics.filter(m => m.metadata?.error);
return {
totalQueries: queryMetrics.length,
averageDuration: this.calculateAverage(queryMetrics.map(m => m.value)),
slowQueryCount: slowQueries.length,
errorCount: errorQueries.length,
slowQueries: slowQueries.slice(-5).map(m => ({
duration: m.value,
query: m.metadata?.query,
timestamp: m.timestamp
}))
};
}
private analyzeCachePerformance(metrics: PerformanceMetric[]): any {
const cacheMetrics = metrics.filter(m => m.metricType === 'cache_hit');
if (cacheMetrics.length === 0) return null;
// Classify by the recorded operation rather than by value, since set
// and delete operations are also stored with value 0.
const hits = cacheMetrics.filter(m => m.metadata?.operation === 'hit');
const misses = cacheMetrics.filter(m => m.metadata?.operation === 'miss');
const lookups = hits.length + misses.length;
return {
totalOperations: cacheMetrics.length,
hitCount: hits.length,
missCount: misses.length,
hitRate: lookups > 0 ? hits.length / lookups : 0,
operationsByType: this.groupBy(cacheMetrics, m => m.metadata?.operation || 'unknown')
};
}
private analyzeMemoryUsage(metrics: PerformanceMetric[]): any {
const memoryMetrics = metrics.filter(m => m.metricType === 'memory_usage');
if (memoryMetrics.length === 0) return null;
const values = memoryMetrics.map(m => m.value);
const latest = memoryMetrics[memoryMetrics.length - 1];
return {
current: latest.value,
average: this.calculateAverage(values),
max: Math.max(...values),
min: Math.min(...values),
trend: this.calculateTrend(values.slice(-10)) // Last 10 readings
};
}
private getActiveAlerts(): any[] {
// Return active alerts from alerting service
return this.alerting.getActiveAlerts();
}
private calculateTrends(metrics: PerformanceMetric[]): any {
const responseTimeMetrics = metrics.filter(m => m.metricType === 'response_time');
const queryMetrics = metrics.filter(m => m.metricType === 'query_duration');
return {
responseTime: this.calculateTrend(responseTimeMetrics.map(m => m.value)),
queryDuration: this.calculateTrend(queryMetrics.map(m => m.value))
};
}
// Utility methods
private calculateAverage(values: number[]): number {
return values.length > 0 ? values.reduce((sum, val) => sum + val, 0) / values.length : 0;
}
private calculatePercentile(values: number[], percentile: number): number {
if (values.length === 0) return 0;
const sorted = [...values].sort((a, b) => a - b);
const index = Math.ceil((percentile / 100) * sorted.length) - 1;
return sorted[Math.max(0, index)];
}
private calculateTrend(values: number[]): 'increasing' | 'decreasing' | 'stable' {
if (values.length < 2) return 'stable';
const firstHalf = values.slice(0, Math.floor(values.length / 2));
const secondHalf = values.slice(Math.floor(values.length / 2));
const firstAvg = this.calculateAverage(firstHalf);
const secondAvg = this.calculateAverage(secondHalf);
// Guard against division by zero when the early window averages zero
if (firstAvg === 0) return secondAvg === 0 ? 'stable' : 'increasing';
const changePercent = Math.abs((secondAvg - firstAvg) / firstAvg) * 100;
if (changePercent < 5) return 'stable';
return secondAvg > firstAvg ? 'increasing' : 'decreasing';
}
private groupBy<T>(array: T[], keyFn: (item: T) => string): Record<string, T[]> {
return array.reduce((groups, item) => {
const key = keyFn(item);
if (!groups[key]) {
groups[key] = [];
}
groups[key].push(item);
return groups;
}, {} as Record<string, T[]>);
}
private sanitizeQuery(query: string): string {
// Remove sensitive data from query strings for logging
return query
.replace(/\$\d+/g, '?') // Replace parameter placeholders
.substring(0, 100); // Limit length
}
private sanitizeCacheKey(key: string): string {
// Remove sensitive data from cache keys
return key.substring(0, 50);
}
private estimateResultSize(result: any): number {
try {
return JSON.stringify(result).length;
} catch {
return 0;
}
}
private generatePerformanceReport(): any {
const dashboard = this.getPerformanceDashboard();
return {
timestamp: new Date(),
summary: dashboard.summary,
issues: this.identifyPerformanceIssues(dashboard),
recommendations: this.generateRecommendations(dashboard)
};
}
private identifyPerformanceIssues(dashboard: any): string[] {
const issues: string[] = [];
if (dashboard.summary.averageResponseTime > 500) {
issues.push('High average response time detected');
}
if (dashboard.summary.cacheHitRate < 0.8) {
issues.push('Low cache hit rate detected');
}
if (dashboard.databasePerformance?.slowQueryCount > 0) {
issues.push('Slow database queries detected');
}
return issues;
}
private generateRecommendations(dashboard: any): string[] {
const recommendations: string[] = [];
if (dashboard.summary.averageResponseTime > 500) {
recommendations.push('Consider implementing response caching or optimizing slow endpoints');
}
if (dashboard.summary.cacheHitRate < 0.8) {
recommendations.push('Review cache invalidation strategy and increase cache TTL where appropriate');
}
return recommendations;
}
private sendToExternalMonitoring(report: any): void {
// Send to external monitoring services (DataDog, New Relic, etc.)
console.log('Sending performance report to external monitoring');
}
}
// Alerting service for performance issues
class AlertingService {
private activeAlerts: any[] = [];
triggerAlert(alert: any): void {
this.activeAlerts.push({
...alert,
id: this.generateAlertId(),
timestamp: new Date(),
status: 'active'
});
// Send notifications (email, Slack, etc.)
this.sendNotification(alert);
}
getActiveAlerts(): any[] {
return this.activeAlerts.filter(alert => alert.status === 'active');
}
private generateAlertId(): string {
return Math.random().toString(36).substring(2, 15);
}
private sendNotification(alert: any): void {
// Implementation for sending notifications
console.log('Sending alert notification:', alert);
}
}
14.7. Summary#
Performance optimization in Web GIS applications requires a comprehensive approach addressing frontend rendering, backend processing, network delivery, and continuous monitoring. The unique challenges of geospatial data—large datasets, complex geometries, and real-time requirements—demand specialized optimization strategies beyond traditional web application techniques.
Frontend optimization focuses on efficient rendering through level-of-detail management, progressive loading, feature clustering, and memory management. Component-level optimizations using React memoization, throttled updates, and virtualized rendering ensure smooth user interactions even with large datasets.
Backend optimization emphasizes spatial indexing, query optimization, and multi-level caching strategies. Database partitioning, materialized views, and optimized spatial queries provide the foundation for high-performance data access. Caching strategies spanning memory, Redis, and CDN layers ensure optimal data delivery across different access patterns.
CDN and edge computing configurations enable global performance optimization through strategic content placement, dynamic tile generation, and intelligent caching rules. Edge workers provide opportunities for real-time data processing closer to users, reducing latency and improving responsiveness.
Comprehensive performance monitoring provides the insights needed to identify bottlenecks, track performance trends, and ensure applications meet performance budgets. Real-time metrics, automated alerting, and detailed analysis enable proactive performance management and continuous optimization.
These optimization strategies form the foundation for deploying scalable Web GIS applications that provide excellent user experiences regardless of data complexity or user location.
14.8. Exercises#
14.8.1. Exercise 14.1: Frontend Performance Optimization#
Objective: Implement comprehensive frontend performance optimizations for a mapping application with large datasets.
Instructions:
Level-of-detail rendering system:
Implement dynamic geometry simplification based on zoom level
Create feature filtering based on importance and viewport
Build progressive loading with user feedback
Add memory management and cleanup routines
Component optimization:
Implement React.memo for expensive map components
Add throttled viewport change handlers
Create virtualized layer rendering for large layer counts
Build performance monitoring components
Rendering optimization:
Implement efficient clustering for point data
Add WebGL-based rendering where appropriate
Create efficient update mechanisms for dynamic data
Optimize tile loading and caching strategies
Deliverable: A highly optimized mapping component with measurable performance improvements and built-in monitoring.
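As a starting point for this exercise, the two core ideas — zoom-dependent simplification and throttled viewport handling — can be sketched as below. The zoom breakpoints and tolerance values are illustrative assumptions, not tuned settings.

```typescript
// Map zoom level to a simplification tolerance (in degrees): coarser
// geometry at low zooms, full detail once the user is zoomed in.
// These breakpoints are placeholders to be tuned against real data.
function simplificationTolerance(zoom: number): number {
  if (zoom < 5) return 0.1;
  if (zoom < 10) return 0.01;
  if (zoom < 14) return 0.001;
  return 0; // render full-resolution geometry
}

// Throttle a viewport-change handler so expensive work (re-querying,
// re-simplifying, re-rendering) runs at most once per `waitMs`.
function throttle<T extends unknown[]>(
  fn: (...args: T) => void,
  waitMs: number
): (...args: T) => void {
  let last = 0;
  return (...args: T) => {
    const now = Date.now();
    if (now - last >= waitMs) {
      last = now;
      fn(...args);
    }
  };
}
```

A map component would wrap its `onMoveEnd` callback with `throttle(handler, 200)` and pass `simplificationTolerance(zoom)` to whatever simplification routine it uses.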
14.8.2. Exercise 14.2: Database and Backend Optimization#
Objective: Optimize database queries and backend processing for spatial data operations.
Instructions:
Spatial indexing strategy:
Create appropriate spatial indexes for different query patterns
Implement table partitioning for large datasets
Build materialized views for common aggregations
Add query optimization monitoring
Query optimization:
Implement efficient bounding box queries
Create optimized nearest neighbor searches
Build clustering queries for map visualization
Add query performance monitoring and alerting
Backend processing optimization:
Implement efficient data transformation pipelines
Create background processing for expensive operations
Build request batching and optimization
Add comprehensive error handling and recovery
Deliverable: An optimized backend system with significant query performance improvements and monitoring capabilities.
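For the bounding-box task, one minimal sketch (assuming PostGIS and a hypothetical `features` table with a `geom` column) builds a parameterized query whose `&&` operator lets the planner use a GiST spatial index:

```typescript
interface BBox { minX: number; minY: number; maxX: number; maxY: number; }

// Build a parameterized PostGIS bounding-box query. The && operator
// tests bounding-box intersection against the spatial index, and
// ST_MakeEnvelope constructs the query rectangle in EPSG:4326.
// Table and column names ("features", "geom") are placeholders.
function buildBBoxQuery(bbox: BBox): { text: string; values: number[] } {
  return {
    text:
      "SELECT id, ST_AsGeoJSON(geom) AS geometry FROM features " +
      "WHERE geom && ST_MakeEnvelope($1, $2, $3, $4, 4326)",
    values: [bbox.minX, bbox.minY, bbox.maxX, bbox.maxY],
  };
}
```

The `{ text, values }` shape matches what a client such as node-postgres accepts; parameterizing the coordinates also avoids SQL injection from user-supplied viewports.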
14.8.3. Exercise 14.3: Multi-Level Caching Implementation#
Objective: Design and implement a comprehensive caching strategy spanning multiple layers.
Instructions:
Memory caching layer:
Implement LRU cache for frequently accessed data
Add cache warming strategies
Create efficient cache invalidation mechanisms
Build cache performance monitoring
Redis distributed caching:
Implement spatial-aware cache keys
Create efficient batch operations
Add compression for large cache entries
Build cache clustering for high availability
CDN and edge caching:
Configure CDN for optimal geospatial content delivery
Implement edge workers for dynamic content generation
Create intelligent cache invalidation strategies
Add CDN performance monitoring and optimization
Deliverable: A comprehensive multi-level caching system with measurable performance improvements and monitoring.
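The in-memory layer of this exercise can start from a compact LRU cache. This sketch relies on `Map` preserving insertion order: a `get` re-inserts the key to mark it most-recently-used, and `set` evicts the oldest entry once capacity is exceeded.

```typescript
// Minimal LRU cache built on Map's insertion-order iteration.
class LRUCache<K, V> {
  private store = new Map<K, V>();
  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    if (!this.store.has(key)) return undefined;
    const value = this.store.get(key)!;
    this.store.delete(key); // re-insert to mark most-recently-used
    this.store.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    if (this.store.has(key)) this.store.delete(key);
    this.store.set(key, value);
    if (this.store.size > this.capacity) {
      // Map iterates in insertion order, so the first key is the LRU.
      const oldest = this.store.keys().next().value as K;
      this.store.delete(oldest);
    }
  }

  get size(): number { return this.store.size; }
}
```

A production version would add TTLs, size-based limits for large geometries, and hit/miss counters feeding the monitoring layer.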
14.8.4. Exercise 14.4: CDN and Edge Computing Setup#
Objective: Configure and optimize CDN infrastructure for global Web GIS application delivery.
Instructions:
CDN configuration:
Set up CDN with optimized cache rules for different content types
Configure compression and optimization settings
Implement geographic content distribution
Add security and performance headers
Edge computing implementation:
Deploy edge workers for dynamic tile generation
Implement API optimization at the edge
Create intelligent request routing
Add edge-side caching and optimization
Global optimization:
Configure multi-region deployment
Implement intelligent traffic routing
Add performance monitoring across regions
Create automated failover and recovery
Deliverable: A globally optimized CDN infrastructure with edge computing capabilities and comprehensive monitoring.
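One recurring edge-caching detail worth sketching: normalizing tile request URLs into canonical cache keys, so that equivalent requests (query parameters in different orders, cache-irrelevant parameters such as auth tokens) hit the same CDN cache entry. The set of parameters treated as significant here (`style`, `v`) is an assumption about the application, not a fixed convention.

```typescript
// Normalize a tile request URL into a canonical cache key.
// Only cache-relevant query parameters are kept, in sorted order.
function tileCacheKey(url: string): string {
  const u = new URL(url);
  const significant = ["style", "v"]; // assumed cache-relevant params
  const kept = significant
    .filter((p) => u.searchParams.has(p))
    .map((p) => `${p}=${u.searchParams.get(p)}`)
    .sort();
  return u.pathname + (kept.length ? "?" + kept.join("&") : "");
}
```

An edge worker would use this key for cache lookups before forwarding misses to the origin, which is often the single biggest lever on tile cache hit rates.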
14.8.5. Exercise 14.5: Performance Monitoring and Analytics#
Objective: Implement comprehensive performance monitoring with automated alerting and optimization recommendations.
Instructions:
Metrics collection:
Implement automatic request and response time monitoring
Add database query performance tracking
Create cache performance analytics
Build system resource monitoring
Performance budgets and alerting:
Define performance budgets for different application components
Implement automated alerting for budget violations
Create escalation and notification systems
Add trend analysis and prediction
Analytics and optimization:
Build real-time performance dashboards
Implement automated performance analysis
Create optimization recommendations
Add capacity planning and scaling insights
Deliverable: A comprehensive performance monitoring system with automated alerting and optimization recommendations.
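The budget-checking step above reduces to a small pure function that a monitoring loop can call on each metrics snapshot; violations would then be handed to an alerting service like the one earlier in this chapter. Metric names and thresholds below are illustrative.

```typescript
interface Budget { metric: string; limit: number; }
interface Violation { metric: string; actual: number; limit: number; }

// Compare measured metrics against declared budgets and return the
// violations for downstream alerting.
function checkBudgets(
  metrics: Record<string, number>,
  budgets: Budget[]
): Violation[] {
  return budgets
    .filter((b) => metrics[b.metric] !== undefined && metrics[b.metric] > b.limit)
    .map((b) => ({ metric: b.metric, actual: metrics[b.metric], limit: b.limit }));
}
```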
14.8.6. Exercise 14.6: Load Testing and Optimization#
Objective: Conduct comprehensive load testing and implement optimizations based on results.
Instructions:
Load testing framework:
Design realistic load testing scenarios for Web GIS applications
Implement automated load testing with various user patterns
Create geographic distribution testing
Establish performance baselines for later comparison
Bottleneck identification:
Analyze load testing results to identify bottlenecks
Create detailed performance profiling
Implement automated bottleneck detection
Add root cause analysis capabilities
Optimization implementation:
Implement targeted optimizations based on testing results
Create before/after performance comparisons
Add automated optimization testing
Build continuous performance improvement processes
Deliverable: A comprehensive load testing framework with automated optimization identification and implementation.
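For establishing baselines and before/after comparisons, percentile latency is the usual yardstick. A minimal nearest-rank implementation:

```typescript
// Compute a latency percentile from load-test samples using the
// nearest-rank method; p95/p99 are common baseline metrics for
// before/after optimization comparisons.
function percentile(samplesMs: number[], p: number): number {
  if (samplesMs.length === 0) throw new Error("no samples");
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.min(rank, sorted.length) - 1];
}
```

Percentiles are preferable to averages here because a handful of slow spatial queries can leave the mean looking healthy while the tail — what users actually feel — degrades.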
14.8.7. Exercise 14.7: Progressive Web App Optimization#
Objective: Optimize a Web GIS application as a Progressive Web App with offline capabilities.
Instructions:
Service worker implementation:
Implement intelligent caching strategies for map tiles and data
Create offline functionality for critical features
Add background synchronization for data updates
Build efficient cache management and cleanup
Progressive enhancement:
Implement adaptive loading based on connection quality
Create fallback strategies for poor connectivity
Add progressive image and tile loading
Build intelligent prefetching and preloading
Performance optimization for mobile:
Optimize touch interactions and gestures
Implement efficient memory management for mobile devices
Create adaptive quality settings based on device capabilities
Add battery usage optimization
Deliverable: A fully optimized Progressive Web App with excellent offline capabilities and mobile performance.
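A useful first step in the service-worker exercise is deciding, per request, which caching strategy applies: immutable map tiles suit cache-first, live feature data should prefer the network, and app-shell assets suit stale-while-revalidate. The URL patterns below are assumptions about the app's routing; the fetch handler itself would dispatch on the returned value.

```typescript
type Strategy = "cache-first" | "network-first" | "stale-while-revalidate";

// Route a request path to a service-worker caching strategy.
// Patterns ("/tiles/z/x/y.*", "/api/...") are illustrative.
function strategyFor(pathname: string): Strategy {
  if (/\/tiles\/\d+\/\d+\/\d+\./.test(pathname)) return "cache-first";
  if (pathname.startsWith("/api/")) return "network-first";
  return "stale-while-revalidate";
}
```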
Reflection Questions:
How do the performance requirements of Web GIS applications differ from traditional web applications?
What are the most effective caching strategies for different types of geospatial data?
How can edge computing improve the global performance of Web GIS applications?
What metrics are most important for monitoring Web GIS application performance?