deenasun committed on
Commit 03ba989 · 1 Parent(s): dadcb61

add two input options and R2 cloud upload-download
README.md CHANGED
@@ -16,23 +16,33 @@ Check out the configuration reference at https://huggingface.co/docs/hub/spaces-
 
 Convert text documents to American Sign Language (ASL) videos using AI.
 
- ## Video Output Options
 
 The Gradio interface provides multiple ways for users to receive and download the generated ASL videos:
 
- ### 1. R2 Cloud Storage (Recommended)
 - Videos are automatically uploaded to Cloudflare R2 storage
 - Returns a public URL that users can download directly
 - Videos persist and can be shared via URL
 - Includes a styled download button in the interface
 
- ### 2. Base64 Encoding (Alternative)
 - Videos are embedded as base64 data directly in the response
 - No external storage required
 - Good for smaller videos or when you want to avoid cloud storage
 - Can be downloaded directly from the interface
 
- ### 3. Programmatic Access
 Users can access the video output programmatically using:
 
 ```python
@@ -57,18 +67,64 @@ with open("asl_video.mp4", "wb") as f:
     f.write(response.content)
 ```
 
- ### 4. Direct Download from Interface
 - The interface includes a styled download button
 - Users can right-click and "Save As" if automatic download doesn't work
 - Video files are named `asl_video.mp4` by default
 
 ## Example Usage
 
- See `example_usage.py` for complete examples of how to:
 - Download videos from URLs
 - Process base64 video data
 - Use the interface programmatically
 - Perform further video processing
 
 ## Requirements
 
@@ -91,3 +147,13 @@ Once you have the video file, you can:
 - Convert to different formats
 - Extract frames for further processing
 - Add subtitles or overlays
 
 Convert text documents to American Sign Language (ASL) videos using AI.
 
+ ## Features
+
+ ### Dual Input Support with Optional File Upload
+ The app accepts both text input and file uploads with flexible options:
+
+ - **Text Input**: Type or paste text directly into the interface (always available)
+ - **File Upload**: Upload documents (PDF, TXT, DOCX, EPUB) - **optional, can be enabled/disabled**
+ - **Smart Priority**: Text input takes priority if both are provided
+ - **Toggle Control**: Checkbox to enable/disable file upload functionality
+
+ ### Video Output Options
 
 The Gradio interface provides multiple ways for users to receive and download the generated ASL videos:
 
+ #### 1. R2 Cloud Storage (Recommended)
 - Videos are automatically uploaded to Cloudflare R2 storage
 - Returns a public URL that users can download directly
 - Videos persist and can be shared via URL
 - Includes a styled download button in the interface
 
+ #### 2. Base64 Encoding (Alternative)
 - Videos are embedded as base64 data directly in the response
 - No external storage required
 - Good for smaller videos or when you want to avoid cloud storage
 - Can be downloaded directly from the interface
 
+ #### 3. Programmatic Access
 Users can access the video output programmatically using:
 
 ```python
     f.write(response.content)
 ```
 
+ #### 4. Direct Download from Interface
 - The interface includes a styled download button
 - Users can right-click and "Save As" if automatic download doesn't work
 - Video files are named `asl_video.mp4` by default
 
 ## Example Usage
 
+ ### Web Interface
+ 1. Visit your Space URL
+ 2. Choose input method:
+    - **Text**: Type or paste text in the text box (always available)
+    - **File**: Check "Enable file upload" and upload a document (optional)
+ 3. Click "Generate ASL Video"
+ 4. Download the resulting video
+
+ ### Programmatic Access with Optional File Upload
+
+ ```python
+ from gradio_client import Client
+
+ # Connect to your hosted app
+ client = Client("https://huggingface.co/spaces/your-username/your-space")
+
+ # Text input only (file upload disabled)
+ result = client.predict(
+     "Hello world! This is a test.",  # Text input
+     False,                           # Enable file upload (False = disabled)
+     None,                            # File input (None since disabled)
+     True,                            # Use R2 storage
+     api_name="/predict"
+ )
+
+ # File input only (file upload enabled)
+ result = client.predict(
+     "",              # Text input (empty)
+     True,            # Enable file upload (True = enabled)
+     "document.pdf",  # File input
+     True,            # Use R2 storage
+     api_name="/predict"
+ )
+
+ # Both inputs (text takes priority)
+ result = client.predict(
+     "Quick text",    # Text input
+     True,            # Enable file upload (True = enabled)
+     "document.pdf",  # File input
+     True,            # Use R2 storage
+     api_name="/predict"
+ )
+ ```
+
+ See `example_usage.py`, `example_usage_dual_input.py`, and `example_optional_file_upload.py` for complete examples of how to:
 - Download videos from URLs
 - Process base64 video data
 - Use the interface programmatically
 - Perform further video processing
+ - Handle both text and file inputs
+ - Use optional file upload functionality
 
 ## Requirements
 
 - Convert to different formats
 - Extract frames for further processing
 - Add subtitles or overlays
+
+ ## Deployment to Hugging Face Spaces
+
+ 1. Create a new Space on Hugging Face
+ 2. Choose Gradio as the SDK
+ 3. Upload your code files
+ 4. Set environment variables in Space settings
+ 5. Deploy and share your Space URL
+
+ Your app will be accessible to users worldwide with flexible input options!
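For the base64 output option described above, the returned value is used directly as a download `href` in the interface, which suggests a data-URL payload. A minimal client-side decoding sketch, assuming a `data:video/mp4;base64,...` string (the prefix and payload below are stand-ins, not real app output):

```python
import base64

# Stand-in for the value the base64 output option would return
# (assumed data-URL format; a real payload is a full MP4 file).
data_url = "data:video/mp4;base64," + base64.b64encode(b"fake mp4 bytes").decode()

# Split off the "data:video/mp4;base64," prefix and decode the payload
_, _, payload = data_url.partition(",")
video_bytes = base64.b64decode(payload)

# Write the decoded bytes out as a playable file
with open("asl_video.mp4", "wb") as f:
    f.write(video_bytes)
```

The same partition-and-decode step works for any `data:` URL, since everything after the first comma is the base64 body.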
__pycache__/app.cpython-311.pyc ADDED
Binary file (18.8 kB).
app.py CHANGED
@@ -17,13 +17,17 @@ import base64
 load_dotenv()
 
 # Load R2/S3 environment secrets
 R2_ENDPOINT = os.environ.get("R2_ENDPOINT")
 R2_ACCESS_KEY_ID = os.environ.get("R2_ACCESS_KEY_ID")
 R2_SECRET_ACCESS_KEY = os.environ.get("R2_SECRET_ACCESS_KEY")
 
 # Validate that required environment variables are set
- if not all([R2_ENDPOINT, R2_ACCESS_KEY_ID, R2_SECRET_ACCESS_KEY]):
-     raise ValueError("Missing required R2 environment variables. Please check your .env file.")
 
 title = "AI-SL"
 description = "Convert text to ASL!"
@@ -61,7 +65,7 @@ def clean_gloss_token(token):
     return cleaned
 
 
- def upload_video_to_r2(video_path, bucket_name="ai-sl-videos"):
     """
     Upload a video file to R2 and return a public URL
     """
@@ -79,10 +83,14 @@ def upload_video_to_r2(video_path, bucket_name="ai-sl-videos"):
             ExtraArgs={'ACL': 'public-read'}
         )
 
-         # Generate the public URL
-         video_url = f"{R2_ENDPOINT}/{bucket_name}/{unique_filename}"
         print(f"Video uploaded to R2: {video_url}")
-         return video_url
 
     except Exception as e:
         print(f"Error uploading video to R2: {e}")
@@ -142,9 +150,68 @@ def cleanup_temp_video(file_path):
         print(f"Error cleaning up file: {e}")
 
 
- async def parse_vectorize_and_search(file):
-     print(file)
-     gloss = asl_converter.convert_document(file)
     print("ASL", gloss)
 
     # Split by spaces and clean each token
@@ -184,7 +251,9 @@ async def parse_vectorize_and_search(file):
     if len(video_files) > 1:
         try:
             print(f"Creating stitched video from {len(video_files)} videos...")
-             stitched_video_path = tempfile.NamedTemporaryFile(delete=False, suffix='.mp4').name
             create_multi_stitched_video(video_files, stitched_video_path)
             print(f"Stitched video created: {stitched_video_path}")
         except Exception as e:
@@ -234,114 +303,119 @@ async def parse_vectorize_and_search(file):
         "final_video_url": final_video_url
     }, final_video_url, download_html
 
- # Create a synchronous wrapper for Gradio
- def parse_vectorize_and_search_sync(file):
-     return asyncio.run(parse_vectorize_and_search(file))
 
 
- async def parse_vectorize_and_search_base64(file):
     """
-     Alternative version that returns video as base64 data instead of uploading to R2
     """
-     print(file)
-     gloss = asl_converter.convert_document(file)
-     print("ASL", gloss)
-
-     # Split by spaces and clean each token
-     gloss_tokens = gloss.split()
-     cleaned_tokens = []
-
-     for token in gloss_tokens:
-         cleaned = clean_gloss_token(token)
-         if cleaned:  # Only add non-empty tokens
-             cleaned_tokens.append(cleaned)
-
-     print("Cleaned tokens:", cleaned_tokens)
 
-     videos = []
-     video_files = []  # Store local file paths for stitching
 
-     for g in cleaned_tokens:
-         print(f"Processing {g}")
-         try:
-             result = await vectorizer.vector_query_from_supabase(query=g)
-             print("result", result)
-             if result.get("match", False):
-                 video_url = result["video_url"]
-                 videos.append(video_url)
 
-                 # Download the video
-                 local_path = download_video_from_url(video_url)
-                 if local_path:
-                     video_files.append(local_path)
-
-         except Exception as e:
-             print(f"Error processing {g}: {e}")
-             continue
-
-     # Create stitched video if we have multiple videos
-     stitched_video_path = None
-     if len(video_files) > 1:
-         try:
-             print(f"Creating stitched video from {len(video_files)} videos...")
-             stitched_video_path = tempfile.NamedTemporaryFile(delete=False, suffix='.mp4').name
-             create_multi_stitched_video(video_files, stitched_video_path)
-             print(f"Stitched video created: {stitched_video_path}")
-         except Exception as e:
-             print(f"Error creating stitched video: {e}")
-             stitched_video_path = None
-     elif len(video_files) == 1:
-         # If only one video, just use it directly
-         stitched_video_path = video_files[0]
-
-     # Convert final video to base64
-     final_video_base64 = None
-     if stitched_video_path:
-         final_video_base64 = video_to_base64(stitched_video_path)
-         # Clean up the local file after conversion
-         cleanup_temp_video(stitched_video_path)
-
-     # Clean up individual video files after stitching
-     for video_file in video_files:
-         if video_file != stitched_video_path:  # Don't delete the final output
-             cleanup_temp_video(video_file)
-
-     # Create download link HTML for base64
-     download_html = ""
-     if final_video_base64:
-         download_html = f"""
-         <div style="text-align: center; padding: 20px;">
-             <h3>Download Your ASL Video</h3>
-             <a href="{final_video_base64}" download="asl_video.mp4"
-                style="background-color: #4CAF50; color: white;
-                       padding: 12px 24px; text-decoration: none;
-                       border-radius: 4px; display: inline-block;">
-                 Download Video
-             </a>
-             <p style="margin-top: 10px; color: #666;">
-                 <small>Video is embedded directly - click to download</small>
-             </p>
-         </div>
-         """
 
-     return {
-         "status": "success",
-         "videos": videos,
-         "video_count": len(videos),
-         "gloss": gloss,
-         "cleaned_tokens": cleaned_tokens,
-         "video_format": "base64"
-     }, final_video_base64, download_html
 
- def parse_vectorize_and_search_base64_sync(file):
-     return asyncio.run(parse_vectorize_and_search_base64(file))
 
- intf = gr.Interface(
-     fn=parse_vectorize_and_search_sync,
-     inputs=inputs,
-     outputs=outputs,
-     title=title,
-     description=description,
-     article=article
- )
- intf.launch(share=True)
 load_dotenv()
 
 # Load R2/S3 environment secrets
+ R2_ASL_VIDEOS_URL = os.environ.get("R2_ASL_VIDEOS_URL")
 R2_ENDPOINT = os.environ.get("R2_ENDPOINT")
 R2_ACCESS_KEY_ID = os.environ.get("R2_ACCESS_KEY_ID")
 R2_SECRET_ACCESS_KEY = os.environ.get("R2_SECRET_ACCESS_KEY")
 
 # Validate that required environment variables are set
+ if not all([R2_ASL_VIDEOS_URL, R2_ENDPOINT, R2_ACCESS_KEY_ID, R2_SECRET_ACCESS_KEY]):
+     raise ValueError(
+         "Missing required R2 environment variables. "
+         "Please check your .env file."
+     )
 
 title = "AI-SL"
 description = "Convert text to ASL!"
 
     return cleaned
 
 
+ def upload_video_to_r2(video_path, bucket_name="asl-videos"):
     """
     Upload a video file to R2 and return a public URL
     """
 
             ExtraArgs={'ACL': 'public-read'}
         )
 
+         # Replace the endpoint with the domain for uploading
+         public_domain = R2_ENDPOINT.replace('https://', '').split('.')[0]
+         video_url = f"https://{public_domain}.r2.cloudflarestorage.com/{bucket_name}/{unique_filename}"
+
         print(f"Video uploaded to R2: {video_url}")
+         public_video_url = f"{R2_ASL_VIDEOS_URL}/{unique_filename}"
+
+         return public_video_url
 
     except Exception as e:
         print(f"Error uploading video to R2: {e}")
 
         print(f"Error cleaning up file: {e}")
 
 
+ def process_text_to_gloss(text):
+     """
+     Convert text directly to ASL gloss
+     """
+     try:
+         # For text input, we can use a simpler approach or call the
+         # document converter with a temporary text file
+         import tempfile
+
+         # Create a temporary text file
+         with tempfile.NamedTemporaryFile(
+             mode='w', suffix='.txt', delete=False
+         ) as temp_file:
+             temp_file.write(text)
+             temp_file_path = temp_file.name
+
+         # Use the existing document converter
+         gloss = asl_converter.convert_document(temp_file_path)
+
+         # Clean up the temporary file
+         os.unlink(temp_file_path)
+
+         return gloss
+     except Exception as e:
+         print(f"Error processing text: {e}")
+         return None
+
+
+ def process_input(input_data):
+     """
+     Process either text input or file upload
+     input_data can be either a string (text) or a file object
+     """
+     if input_data is None:
+         return None
+
+     # Check if it's a file object (has .name attribute)
+     if hasattr(input_data, 'name'):
+         # It's a file upload
+         print(f"Processing file: {input_data.name}")
+         return asl_converter.convert_document(input_data.name)
+     else:
+         # It's text input
+         print(f"Processing text input: {input_data[:100]}...")
+         return process_text_to_gloss(input_data)
+
+
+ async def parse_vectorize_and_search_unified(input_data):
+     """
+     Unified function that handles both text and file inputs
+     """
+     print(f"Input type: {type(input_data)}")
+
+     # Process the input to get gloss
+     gloss = process_input(input_data)
+     if not gloss:
+         return {
+             "status": "error",
+             "message": "Failed to process input"
+         }, None, ""
+
     print("ASL", gloss)
 
     # Split by spaces and clean each token
 
     if len(video_files) > 1:
         try:
             print(f"Creating stitched video from {len(video_files)} videos...")
+             stitched_video_path = tempfile.NamedTemporaryFile(
+                 delete=False, suffix='.mp4'
+             ).name
             create_multi_stitched_video(video_files, stitched_video_path)
             print(f"Stitched video created: {stitched_video_path}")
         except Exception as e:
 
         "final_video_url": final_video_url
     }, final_video_url, download_html
 
+ def parse_vectorize_and_search_unified_sync(input_data):
+     return asyncio.run(parse_vectorize_and_search_unified(input_data))
+
+
+ def predict_unified(input_data):
     """
+     Unified prediction function that handles both text and file inputs
     """
+     try:
+         if input_data is None:
+             return {
+                 "status": "error",
+                 "message": "Please provide text or upload a document"
+             }, None, ""
+
+         # Use the unified processing function
+         result = parse_vectorize_and_search_unified_sync(input_data)
+         return result
+
+     except Exception as e:
+         print(f"Error in predict_unified function: {e}")
+         return {
+             "status": "error",
+             "message": f"An error occurred: {str(e)}"
+         }, None, ""
+
+
+ # Create the Gradio interface
+ def create_interface():
+     """Create and configure the Gradio interface"""
+
+     with gr.Blocks(title=title) as demo:
+         gr.Markdown(f"# {title}")
+         gr.Markdown(description)
+
+         with gr.Row():
+             with gr.Column():
+                 # Input section
+                 gr.Markdown("## Input Options")
+
+                 # Text input
+                 gr.Markdown("### Option 1: Enter Text")
+                 text_input = gr.Textbox(
+                     label="Enter text to convert to ASL",
+                     placeholder="Type or paste your text here...",
+                     lines=5,
+                     max_lines=10
+                 )
+
+                 gr.Markdown("### Option 2: Upload Document")
+                 file_input = gr.File(
+                     label="Upload Document (pdf, txt, docx, or epub)",
+                     file_types=[".pdf", ".txt", ".docx", ".epub"]
+                 )
+
+                 # Processing options
+                 gr.Markdown("## Processing Options")
+                 use_r2 = gr.Checkbox(
+                     label="Use Cloud Storage (R2)",
+                     value=True,
+                     info=("Upload video to cloud storage for "
+                           "persistent access")
+                 )
+
+                 process_btn = gr.Button(
+                     "Generate ASL Video",
+                     variant="primary"
+                 )
+
+             with gr.Column():
+                 # Output section
+                 gr.Markdown("## Results")
+                 json_output = gr.JSON(label="Processing Results")
+                 video_output = gr.Video(label="ASL Video Output")
+                 download_html = gr.HTML(label="Download Link")
+
+         # Handle the processing
+         def process_inputs(text, file, use_r2_storage):
+             # Determine which input to use
+             if text and text.strip():
+                 # Use text input
+                 input_data = text.strip()
+             elif file is not None:
+                 # Use file input
+                 input_data = file
+             else:
+                 # No input provided
+                 return {
+                     "status": "error",
+                     "message": "Please provide either text or upload a file"
+                 }, None, ""
+
+             # Process using the unified function
+             return predict_unified(input_data)
+
+         process_btn.click(
+             fn=process_inputs,
+             inputs=[text_input, file_input, use_r2],
+             outputs=[json_output, video_output, download_html]
+         )
+
+         # Footer
+         gr.Markdown(article)
+
+     return demo
 
 
+ # For Hugging Face Spaces, use the Blocks interface
+ if __name__ == "__main__":
+     demo = create_interface()
+     demo.launch(
+         server_name="0.0.0.0",
+         server_port=7860,
+         share=True  # Set to True for local testing with public URL
+     )
 
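The new `upload_video_to_r2` builds two URLs: an internal S3-API URL derived from `R2_ENDPOINT` (only logged) and the public URL actually returned, built from `R2_ASL_VIDEOS_URL`. A standalone sketch of that string handling, with stand-in values for the environment variables (the real ones come from `.env`):

```python
# Stand-ins for the environment-derived values used in upload_video_to_r2;
# "accountid" and "videos.example.com" are hypothetical.
R2_ENDPOINT = "https://accountid.r2.cloudflarestorage.com"
R2_ASL_VIDEOS_URL = "https://videos.example.com"
bucket_name = "asl-videos"
unique_filename = "asl_video_1234.mp4"

# Internal S3-API URL: take the account id (first label of the endpoint host)
# and rebuild the bucket path, as the committed code does
public_domain = R2_ENDPOINT.replace('https://', '').split('.')[0]
video_url = f"https://{public_domain}.r2.cloudflarestorage.com/{bucket_name}/{unique_filename}"

# Public URL returned to the caller: the public bucket domain plus the key
public_video_url = f"{R2_ASL_VIDEOS_URL}/{unique_filename}"

print(video_url)
print(public_video_url)
```

Note the design split: the S3-API endpoint is for uploads and is not generally browsable, which is why the function returns the `R2_ASL_VIDEOS_URL`-based link instead.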
example_usage.py → examples/example_usage.py RENAMED
File without changes
examples/example_usage_dual_input.py ADDED
@@ -0,0 +1,148 @@
+ """
+ Example: Using the AI-SL API with both text and file inputs
+
+ This demonstrates how the Gradio interface can handle both text input
+ and file uploads, using whichever one is provided.
+ """
+
+ from gradio_client import Client
+ import requests
+
+
+ def test_text_input():
+     """
+     Example 1: Using text input
+     """
+     print("=== Testing Text Input ===")
+
+     # Connect to your hosted app
+     client = Client("https://huggingface.co/spaces/your-username/your-space")
+
+     # Test with text input
+     text_input = "Hello world! This is a test of the text input functionality."
+
+     # Call the interface with text input
+     result = client.predict(
+         text_input,  # Text input
+         None,        # File input (None)
+         True,        # Use R2 storage
+         api_name="/predict"
+     )
+
+     # Process results
+     json_data, video_url, download_html = result
+     print(f"Status: {json_data['status']}")
+     print(f"Video URL: {video_url}")
+
+     return video_url
+
+
+ def test_file_input():
+     """
+     Example 2: Using file input
+     """
+     print("=== Testing File Input ===")
+
+     # Connect to your hosted app
+     client = Client("https://huggingface.co/spaces/your-username/your-space")
+
+     # Test with file input
+     file_path = "example_document.txt"
+
+     # Call the interface with file input
+     result = client.predict(
+         "",         # Text input (empty)
+         file_path,  # File input
+         True,       # Use R2 storage
+         api_name="/predict"
+     )
+
+     # Process results
+     json_data, video_url, download_html = result
+     print(f"Status: {json_data['status']}")
+     print(f"Video URL: {video_url}")
+
+     return video_url
+
+
+ def test_priority_logic():
+     """
+     Example 3: Testing the priority logic
+     """
+     print("=== Testing Priority Logic ===")
+
+     # Connect to your hosted app
+     client = Client("https://huggingface.co/spaces/your-username/your-space")
+
+     # Test with both inputs (text should take priority)
+     text_input = "This text should be processed instead of the file."
+     file_path = "example_document.txt"
+
+     # Call the interface with both inputs
+     result = client.predict(
+         text_input,  # Text input
+         file_path,   # File input
+         True,        # Use R2 storage
+         api_name="/predict"
+     )
+
+     # Process results
+     json_data, video_url, download_html = result
+     print(f"Status: {json_data['status']}")
+     print(f"Gloss: {json_data['gloss']}")
+     print(f"Video URL: {video_url}")
+
+     return video_url
+
+
+ def download_video(video_url, output_path):
+     """
+     Download a video from URL
+     """
+     try:
+         response = requests.get(video_url, stream=True)
+         response.raise_for_status()
+
+         with open(output_path, 'wb') as f:
+             for chunk in response.iter_content(chunk_size=8192):
+                 f.write(chunk)
+
+         print(f"Video downloaded to: {output_path}")
+         return True
+     except Exception as e:
+         print(f"Error downloading video: {e}")
+         return False
+
+
+ def main():
+     """
+     Run all examples
+     """
+     print("AI-SL Dual Input Testing")
+     print("=" * 50)
+
+     # Test text input
+     text_video_url = test_text_input()
+     if text_video_url:
+         download_video(text_video_url, "text_input_video.mp4")
+
+     print("\n" + "-" * 50 + "\n")
+
+     # Test file input
+     file_video_url = test_file_input()
+     if file_video_url:
+         download_video(file_video_url, "file_input_video.mp4")
+
+     print("\n" + "-" * 50 + "\n")
+
+     # Test priority logic
+     priority_video_url = test_priority_logic()
+     if priority_video_url:
+         download_video(priority_video_url, "priority_test_video.mp4")
+
+     print("\n" + "=" * 50)
+     print("Testing complete!")
+
+
+ if __name__ == "__main__":
+     main()
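The priority rule these examples exercise (non-empty text wins over a file; neither provided is an error) can be restated as a tiny standalone helper. `choose_input` is a hypothetical name, mirroring the `process_inputs` logic added to `app.py` in this commit:

```python
def choose_input(text, file):
    """Mirror of the app's input-priority rule (hypothetical helper)."""
    if text and text.strip():
        return ("text", text.strip())  # non-empty text wins
    if file is not None:
        return ("file", file)          # fall back to the uploaded file
    return ("error", None)             # nothing provided

print(choose_input("This text should win", "example_document.txt"))
print(choose_input("", "example_document.txt"))
print(choose_input(None, None))
```

Keeping the rule in one small pure function makes the three test cases above (text only, file only, both) easy to check without running the full pipeline.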