Repository for Reference: https://github.com/Jain1shh/HLS-Video-Streaming-Platform
Building a video streaming platform might seem daunting, but with the right combination of technologies, it becomes surprisingly manageable. In this post, I'll walk you through how I built a production-ready HLS video streaming platform using Java 21, Spring Boot 3, and FFmpeg.
What is HLS and Why Should You Care?
HTTP Live Streaming (HLS) is Apple's adaptive bitrate streaming protocol that has become the industry standard for video delivery. Instead of serving one massive video file, HLS breaks your content into small, manageable segments (typically 6-10 seconds each) and creates a playlist file that tells video players how to stitch everything back together.
Why HLS Dominates Video Streaming
- Adaptive Streaming: Automatically adjusts video quality based on the user's network conditions
- Universal Compatibility: Works seamlessly across all modern browsers, mobile devices, and smart TVs
- CDN-Friendly: Small segments can be cached and distributed efficiently worldwide
- Firewall-Friendly: Uses standard HTTP, so it works through corporate firewalls
- Resilient Playback: If one segment fails to load, playback continues with the next one
The Architecture: How Each Component Fits
FFmpeg: The Video Processing Powerhouse
FFmpeg acts as our video conversion engine. It takes uploaded video files and transforms them into HLS-compatible segments. Think of it as a Swiss Army knife for video processing - it can handle virtually any video format and convert it to the standardized HLS format that all players understand.
Java 21 + Spring Boot 3: The Application Backbone
Our Java application serves as the orchestrator. It handles file uploads, manages FFmpeg processing, stores video metadata, and serves the streaming content. Spring Boot provides the REST API framework, while Java 21 gives us the latest performance improvements and language features.
FFmpeg HLS Processing: Under the Hood
Our application uses this carefully crafted FFmpeg command:
```
ffmpeg -i "input_video.mp4" -codec: copy -start_number 0 -hls_time 10 -hls_list_size 0 -f hls "output_playlist.m3u8"
```
Command Breakdown
| Parameter | Description |
|---|---|
| `-i "input_video.mp4"` | Input video file path |
| `-codec: copy` | Copy streams without re-encoding (preserves quality, saves time) |
| `-start_number 0` | Start segment numbering from 0 |
| `-hls_time 10` | Each segment duration is 10 seconds |
| `-hls_list_size 0` | Keep all segments in the playlist (unlimited) |
| `-f hls` | Output format is HLS |
| `"output_playlist.m3u8"` | Output playlist file name |
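On the Java side, a command like this can be launched with `ProcessBuilder`. Here is a minimal sketch; the class and method names are my own for illustration, not taken from the repository:

```java
import java.io.IOException;
import java.nio.file.Path;

public class HlsConverter {

    // Runs the FFmpeg command shown above as an external process and blocks until it exits.
    public void convertToHls(Path inputVideo, Path outputPlaylist) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "ffmpeg",
                "-i", inputVideo.toString(),
                "-codec:", "copy",
                "-start_number", "0",
                "-hls_time", "10",
                "-hls_list_size", "0",
                "-f", "hls",
                outputPlaylist.toString());
        pb.inheritIO(); // stream FFmpeg's console output to the application's console

        int exitCode = pb.start().waitFor();
        if (exitCode != 0) {
            throw new IllegalStateException("FFmpeg failed with exit code " + exitCode);
        }
    }
}
```

In a real service you would run this off the request thread, which is exactly what the upload endpoint described later does.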
The Processing Magic
1. Segmentation: FFmpeg splits your video into digestible 10-second chunks:

```
video_segment_0.ts
video_segment_1.ts
video_segment_2.ts
...
```
2. Playlist Creation: It generates an M3U8 playlist file that looks like this:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
video_segment_0.ts
#EXTINF:10.0,
video_segment_1.ts
#EXTINF:10.0,
video_segment_2.ts
#EXT-X-ENDLIST
```
Adaptive Bitrate Example (More Complex, but Supports Multiple Resolutions)
I also implemented adaptive bitrate streaming in my project. For multiple qualities, the command below takes the original video file, converts it into playlists at several resolutions, and generates one master.m3u8 file that keeps track of all the resolution playlists:
```
ffmpeg -i input.mp4 \
  -map 0:v -map 0:a -map 0:v -map 0:a \
  -c:v:0 libx264 -b:v:0 2M -s:v:0 1280x720 \
  -c:v:1 libx264 -b:v:1 500k -s:v:1 640x360 \
  -c:a copy \
  -f hls -hls_time 10 -hls_list_size 0 \
  -master_pl_name master.m3u8 \
  -var_stream_map "v:0,a:0 v:1,a:1" output_%v.m3u8
```
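With that command, FFmpeg writes one media playlist per variant (output_0.m3u8 and output_1.m3u8) plus a master playlist that looks roughly like this; the exact BANDWIDTH and CODECS attributes depend on the actual encode:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=2200000,RESOLUTION=1280x720
output_0.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=550000,RESOLUTION=640x360
output_1.m3u8
```

The player measures its throughput, picks a variant from this list, and can switch between them mid-stream.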
Performance Considerations: The Resource Reality
Here's something important to understand: FFmpeg is resource-intensive, especially when re-encoding (as in the adaptive bitrate command above). During video processing, you'll see:
- CPU Usage: Can spike to 80-100% on a single core
- Memory Consumption: 6GB-9GB+ depending on video resolution and length
- I/O Load: Heavy disk read/write operations
For production environments, consider:
- Processing videos in a queue system
- Using dedicated processing servers
- Implementing upload size limits (see the snippet below)
- Adding progress monitoring
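Of these, the upload size limit is the quickest win: Spring Boot supports it out of the box via application.properties (the 500MB values here are just an example):

```properties
# Cap multipart uploads so one oversized file can't exhaust disk or memory
spring.servlet.multipart.max-file-size=500MB
spring.servlet.multipart.max-request-size=500MB
```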
The Spring Boot API: Bringing It All Together
Our Maven project exposes three main endpoints that handle the complete video streaming workflow:
1. Video Upload Endpoint
```
POST /api/videos/upload
```
This endpoint accepts multipart form data with three parameters:
- `file`: The video file to upload
- `title`: Video title for metadata
- `description`: Video description
The endpoint triggers asynchronous FFmpeg processing and returns immediately with a video ID, allowing users to continue while processing happens in the background.
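Here is a sketch of what such a controller method might look like; `VideoService` and its `saveAndProcessAsync` method are illustrative names of my own, assuming `@EnableAsync` is configured so processing runs off the request thread:

```java
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

@RestController
@RequestMapping("/api/videos")
public class VideoController {

    private final VideoService videoService; // hypothetical service wrapping storage + FFmpeg

    public VideoController(VideoService videoService) {
        this.videoService = videoService;
    }

    @PostMapping("/upload")
    public ResponseEntity<String> upload(@RequestParam("file") MultipartFile file,
                                         @RequestParam("title") String title,
                                         @RequestParam("description") String description) {
        // Store the raw file and metadata, then kick off FFmpeg on a background thread
        String videoId = videoService.saveAndProcessAsync(file, title, description);
        return ResponseEntity.accepted().body(videoId); // HTTP 202: work continues asynchronously
    }
}
```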
2. Master Playlist Endpoint
```
GET /api/videos/stream/{videoId}
```
Returns the main M3U8 playlist file that video players use to initiate streaming. This endpoint sets the correct MIME type (`application/vnd.apple.mpegurl`) so browsers know how to handle the response.
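A minimal version, inside the same controller sketched above, might look like this; the videos/ base path matches the file structure shown later:

```java
// Inside the VideoController sketched above
@GetMapping("/stream/{videoId}")
public ResponseEntity<Resource> masterPlaylist(@PathVariable String videoId) {
    // FFmpeg writes playlist.m3u8 into videos/{videoId}/ after processing
    Path playlist = Path.of("videos", videoId, "playlist.m3u8");
    if (!Files.exists(playlist)) {
        return ResponseEntity.notFound().build();
    }
    return ResponseEntity.ok()
            .contentType(MediaType.parseMediaType("application/vnd.apple.mpegurl"))
            .body(new FileSystemResource(playlist));
}
```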
3. Segment Streaming Endpoint
```
GET /api/videos/stream/{videoId}/**
```
This is where the magic happens. The wildcard pattern captures requests for individual video segments (.ts files) and additional playlist files. The endpoint (see the sketch below):
- Dynamically resolves file paths within the video directory
- Sets appropriate content types (`video/MP2T` for .ts files)
- Serves files with proper streaming headers
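A sketch of that handler, again inside the same controller; the URI parsing here is deliberately simplified, and the traversal guard is my own illustrative addition:

```java
// Inside the VideoController; HttpServletRequest is jakarta.servlet.http.HttpServletRequest in Spring Boot 3
@GetMapping("/stream/{videoId}/**")
public ResponseEntity<Resource> serveSegment(@PathVariable String videoId,
                                             HttpServletRequest request) {
    // Everything after "/stream/{videoId}/" is the requested file, e.g. "video_segment_3.ts"
    // (simplified: assumes videoId's first occurrence in the URI is the path variable)
    String uri = request.getRequestURI();
    String requested = uri.substring(uri.indexOf(videoId) + videoId.length() + 1);

    // Resolve inside this video's directory and reject "../" traversal attempts
    Path baseDir = Path.of("videos", videoId).toAbsolutePath().normalize();
    Path file = baseDir.resolve(requested).normalize();
    if (!file.startsWith(baseDir)) {
        return ResponseEntity.badRequest().build();
    }

    MediaType type = requested.endsWith(".ts")
            ? MediaType.parseMediaType("video/MP2T")
            : MediaType.parseMediaType("application/vnd.apple.mpegurl");
    return ResponseEntity.ok().contentType(type).body(new FileSystemResource(file));
}
```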
File Structure After Processing (Without Adaptive Bitrate)
After FFmpeg completes its work, your video directory looks like this:
```
videos/{videoId}/
├── playlist.m3u8        # Main playlist file
├── video_segment_0.ts   # First 10-second segment
├── video_segment_1.ts   # Second segment
├── video_segment_2.ts   # Third segment
└── ...                  # Additional segments
```
How Video Players Consume HLS
The streaming process follows this elegant sequence:
1. The player requests the M3U8 playlist
2. It parses the segment list and determines playback order
3. It downloads segments sequentially, maintaining a buffer
4. It plays segments seamlessly while downloading upcoming ones
5. It handles network changes by requesting different-quality segments (in adaptive implementations)
Advantages of Our Implementation
Speed: Using `-codec: copy` means no re-encoding, dramatically reducing processing time from minutes to seconds.
Compatibility: Standard HLS format works across all devices and browsers without additional plugins.
Simplicity: Single-quality stream implementation is straightforward and reliable.
Resource Efficiency: Stream copying uses minimal CPU compared to transcoding.
Limitations and Trade-offs
Codec Dependency: Output quality depends entirely on the input video's codec and bitrate.
Storage Requirements: HLS creates multiple files, increasing storage overhead compared to single-file streaming.
Production Considerations
For a production deployment, consider these enhancements:
Queue System: Process videos asynchronously using Redis or RabbitMQ to handle concurrent uploads.
CDN Integration: Serve segments through a CDN for global distribution and reduced server load.
Progress Tracking: Implement WebSocket or polling to show processing progress to users (see the sketch below).
Multiple Qualities: Add adaptive bitrate streaming for different network conditions.
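For progress tracking in particular, even simple polling goes a long way. A hypothetical status endpoint; the enum and the `findStatus` method are illustrative, not part of the repository:

```java
public enum ProcessingStatus { UPLOADED, PROCESSING, READY, FAILED }

// Inside the VideoController: lets the client poll while FFmpeg runs in the background
@GetMapping("/{videoId}/status")
public ResponseEntity<ProcessingStatus> status(@PathVariable String videoId) {
    return videoService.findStatus(videoId)   // assumed to return Optional<ProcessingStatus>
            .map(ResponseEntity::ok)
            .orElse(ResponseEntity.notFound().build());
}
```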
Getting Started
The complete implementation is available in the GitHub repository. Clone it, run `mvn spring-boot:run`, and you'll have a working HLS streaming platform in minutes.
This architecture provides a solid foundation for video streaming applications, from simple proof-of-concepts to production services handling thousands of users. The combination of Java's robustness, Spring Boot's simplicity, and FFmpeg's power creates a surprisingly capable streaming platform.
Have questions about the implementation or want to suggest improvements? Feel free to open an issue in the repository or reach out!