My project is based on generative AI, and I need to show 3 tasks in a single project. The user gives an image and a prompt as input for each of these 3 tasks. 1st: if the user gives an image and a prompt, the result should be an animated image in which the prompt is reflected. 2nd: object replacement, where the user gives an input image and a prompt, and the prompt says what to replace in the given image. 3rd: object removal, where the user gives an input image and a prompt, and the prompt says which object to remove from the given image. Combine these 3 tasks in a single Python project and use relevant generative AI algorithms to make an effective project. Also show me how to run the whole project step by step.
I'll create a comprehensive Python application that combines three generative AI image manipulation tasks in a single project: prompt-guided image animation, object replacement, and object removal.
Let's implement this step by step using Python and relevant generative AI libraries.
animate_gen_ai.py
requirements.txt
download_models.py
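For reference, a plausible requirements.txt for this stack could look like the following (the exact package set and the absence of version pins are assumptions, not the contents of the file above):

torch
torchvision
diffusers
transformers
accelerate
safetensors
segment-anything    # or: pip install git+https://github.com/facebookresearch/segment-anything.git
gradio
opencv-python
numpy
Pillow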
First, create a virtual environment and install the required dependencies:
# Create a virtual environment
python -m venv venv
# Activate the virtual environment
# On Windows:
venv\Scripts\activate
# On macOS/Linux:
source venv/bin/activate
# Install the required packages
pip install -r requirements.txt
Before running the main application, you need to download the necessary AI models:
python download_models.py
This script will download:
- the Segment Anything Model (SAM) checkpoint, used to segment objects for replacement and removal
- the Stable Diffusion v1.5 pipelines for image-to-image generation and inpainting
Note: This step might take some time as the models are large (several GB in total).
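For reference, the core of such a download script typically looks like this (a minimal sketch; the project's actual download_models.py may differ, and the inpainting repo id below is an assumption):

# download_models.py (sketch): prefetch the SAM checkpoint and Stable Diffusion pipelines
import os
import urllib.request

import torch
from diffusers import StableDiffusionImg2ImgPipeline, StableDiffusionInpaintPipeline

SAM_URL = "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth"
SAM_PATH = "sam_vit_h_4b8939.pth"

def download_sam_checkpoint():
    # Download the SAM ViT-H checkpoint used for object segmentation
    if not os.path.exists(SAM_PATH):
        print(f"Downloading SAM checkpoint from {SAM_URL}")
        urllib.request.urlretrieve(SAM_URL, SAM_PATH)
    print(f"Downloaded SAM checkpoint to {SAM_PATH}")

def download_diffusion_models():
    # Instantiating the pipelines once caches their weights under ~/.cache/huggingface/hub
    dtype = torch.float16 if torch.cuda.is_available() else torch.float32
    print("Downloading Stable Diffusion v1.5 for image-to-image...")
    StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=dtype)
    print("Downloading Stable Diffusion inpainting model...")
    StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting", torch_dtype=dtype)

if __name__ == "__main__":
    print("Starting model downloads...")
    download_sam_checkpoint()
    download_diffusion_models()
    print("All models downloaded.")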
Once all the models are downloaded, you can run the main application:
python animate_gen_ai.py
This will start a Gradio web interface that you can access in your browser (typically at http://127.0.0.1:7860). In the interface:
Select a Task: choose between animation, object replacement, and object removal, and upload your input image.
Enter a Prompt: describe the desired animation, what the selected object should be replaced with, or which object should be removed, then run the task.
The animation task uses Stable Diffusion's img2img pipeline to create multiple frames with slight variations. These frames are then combined into a GIF animation. The strength parameter controls how much the original image is preserved versus how much is generated based on the prompt.
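As a rough illustration, the frame loop could look like this (the function name, frame count, and strength schedule are illustrative assumptions, not necessarily what animate_gen_ai.py uses):

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

def animate(image_path, prompt, num_frames=8, out_path="animation.gif"):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16 if device == "cuda" else torch.float32,
    ).to(device)

    init_image = Image.open(image_path).convert("RGB").resize((512, 512))
    frames = []
    for i in range(num_frames):
        # Lower strength keeps more of the original image; vary it slightly per frame
        strength = 0.3 + 0.02 * i
        frame = pipe(prompt=prompt, image=init_image, strength=strength, guidance_scale=7.5).images[0]
        frames.append(frame)

    # Combine the generated frames into a looping GIF
    frames[0].save(out_path, save_all=True, append_images=frames[1:], duration=150, loop=0)
    return out_path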
For object replacement, the application segments the target object with SAM, builds a binary mask from that segmentation, and then uses Stable Diffusion inpainting to fill the masked region with whatever the prompt asks for.
Object removal works similarly to replacement but uses a neutral prompt for inpainting to blend the object with the background, effectively removing it from the image.
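Here is a condensed sketch of the segmentation-plus-inpainting flow behind both tasks, assuming the target object is selected with a point prompt for SAM (names and prompts here are illustrative):

import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline
from segment_anything import SamPredictor, sam_model_registry

def build_mask(image, point_xy):
    # Segment the object under the (x, y) point with SAM and return a binary mask
    sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
    predictor = SamPredictor(sam)
    predictor.set_image(np.array(image))
    masks, scores, _ = predictor.predict(
        point_coords=np.array([point_xy]),
        point_labels=np.array([1]),  # 1 marks a foreground point
    )
    best = masks[int(np.argmax(scores))]
    return Image.fromarray((best * 255).astype(np.uint8))

def inpaint(image, mask, prompt):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting",
        torch_dtype=torch.float16 if device == "cuda" else torch.float32,
    ).to(device)
    # Replacement: pass the user's prompt. Removal: pass a neutral prompt
    # such as "background, empty scene" so the region blends with its surroundings.
    return pipe(prompt=prompt, image=image.resize((512, 512)), mask_image=mask.resize((512, 512))).images[0]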
How do I fix this issue? Running python download_models.py gives:
Starting model downloads...
Downloading SAM model checkpoint...
Downloading SAM checkpoint from https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth
Downloaded SAM checkpoint to sam_vit_h_4b8939.pth
Downloading Stable Diffusion models...
Downloading Stable Diffusion v1.5 for image-to-image...
model_index.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 541/541 [00:00<00:00, 2.09MB/s]
C:\Users\jesin\AppData\Roaming\Python\Python313\site-packages\huggingface_hub\file_download.py:144: UserWarning: huggingface_hub
cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in C:\Users\jesin.cache\huggingface\hub\models--runwayml--stable-diffusion-v1-5. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the HF_HUB_DISABLE_SYMLINKS_WARNING
environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations.
To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development
warnings.warn(message)
scheduler_config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 308/308 [00:00<00:00, 250kB/s]
config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 4.72k/4.72k [00:00<00:00, 6.72MB/s]
special_tokens_map.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 472/472 [00:00<00:00, 1.85MB/s]
preprocessor_config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 342/342 [00:00<00:00, 1.46MB/s]
config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████| 743/743 [00:00<00:00, 2.77MB/s]
tokenizer_config.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████| 806/806 [00:00<00:00, 3.53MB/s]
C:\Users\jesin\AppData\Roaming\Python\Python313\site-packages\huggingface_hub\file_download.py:752: UserWarning: Not enough free disk space to download the file. The expected file size is: 3438.17 MB. The target location C:\Users\jesin.cache\huggingface\hub\models--runwayml--stable-diffusion-v1-5\blobs only has 1520.46 MB free disk space.
warnings.warn(message)
config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████| 547/547 [00:00<00:00, 2.78MB/s]
vocab.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.06M/1.06M [00:03<00:00, 349kB/s]
config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████| 617/617 [00:00<00:00, 2.08MB/s]
merges.txt: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████| 525k/525k [00:00<00:00, 668kB/s]
model.safetensors: 31%|██████████████████████████████ | 377M/1.22G [05:34<12:22, 1.13MB/s]
Fetching 15 files: 20%|████████████████████▏ | 3/15 [05:35<22:22, 111.89s/it]
diffusion_pytorch_model.safetensors: 10%|███████▉ | 346M/3.44G [05:36<50:11, 1.03MB/s]
model.safetensors: 60%|███████████████████████████████████████████████████████████ | 294M/492M [05:39<03:49, 865kB/s]
diffusion_pytorch_model.safetensors: 97%|██████████████████████████████████████████████████████████████████████████████▋ | 325M/335M [05:42<00:10, 949kB/s]
Traceback (most recent call last):
File "C:\AnimateGenAi\download_models.py", line 62, in <module>
download_diffusion_models()
~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "C:\AnimateGenAi\download_models.py", line 32, in download_diffusion_models
StableDiffusionImg2ImgPipeline.from_pretrained(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
"runwayml/stable-diffusion-v1-5",
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "C:\Users\jesin\AppData\Roaming\Python\Python313\site-packages\huggingface_hub\utils_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "C:\Users\jesin\AppData\Roaming\Python\Python313\site-packages\diffusers\pipelines\pipeline_utils.py", line 773, in from_pretrained
cached_folder = cls.download(
pretrained_model_name_or_path,
...<14 lines>...
**kwargs,
)
File "C:\Users\jesin\AppData\Roaming\Python\Python313\site-packages\huggingface_hub\utils_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "C:\Users\jesin\AppData\Roaming\Python\Python313\site-packages\diffusers\pipelines\pipeline_utils.py", line 1557, in download
cached_folder = snapshot_download(
pretrained_model_name,
...<7 lines>...
user_agent=user_agent,
)
File "C:\Users\jesin\AppData\Roaming\Python\Python313\site-packages\huggingface_hub\utils_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "C:\Users\jesin\AppData\Roaming\Python\Python313\site-packages\huggingface_hub_snapshot_download.py", line 296, in snapshot_download
thread_map(
~~~~~~~~~~^
_inner_hf_hub_download,
^^^^^^^^^^^^^^^^^^^^^^^
...<4 lines>...
tqdm_class=tqdm_class or hf_tqdm,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "C:\Users\jesin\AppData\Roaming\Python\Python313\site-packages\tqdm\contrib\concurrent.py", line 69, in thread_map
return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
File "C:\Users\jesin\AppData\Roaming\Python\Python313\site-packages\tqdm\contrib\concurrent.py", line 51, in _executor_map
return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs))
File "C:\Users\jesin\AppData\Roaming\Python\Python313\site-packages\tqdm\std.py", line 1181, in iter
for obj in iterable:
^^^^^^^^
File "C:\Python313\Lib\concurrent\futures_base.py", line 619, in result_iterator
yield _result_or_cancel(fs.pop())
~~~~~~~~~~~~~~~~~^^^^^^^^^^
File "C:\Python313\Lib\concurrent\futures_base.py", line 317, in _result_or_cancel
return fut.result(timeout)
~~~~~~~~~~^^^^^^^^^
File "C:\Python313\Lib\concurrent\futures_base.py", line 456, in result
return self.__get_result()
~~~~~~~~~~~~~~~~~^^
File "C:\Python313\Lib\concurrent\futures_base.py", line 401, in __get_result
raise self._exception
File "C:\Python313\Lib\concurrent\futures\thread.py", line 59, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\Users\jesin\AppData\Roaming\Python\Python313\site-packages\huggingface_hub_snapshot_download.py", line 270, in _inner_hf_hub_download
return hf_hub_download(
repo_id,
...<15 lines>...
headers=headers,
)
File "C:\Users\jesin\AppData\Roaming\Python\Python313\site-packages\huggingface_hub\utils_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "C:\Users\jesin\AppData\Roaming\Python\Python313\site-packages\huggingface_hub\file_download.py", line 961, in hf_hub_download
return _hf_hub_download_to_cache_dir(
# Destination
...<14 lines>...
force_download=force_download,
)
File "C:\Users\jesin\AppData\Roaming\Python\Python313\site-packages\huggingface_hub\file_download.py", line 1112, in _hf_hub_download_to_cache_dir
_download_to_tmp_and_move(
~~~~~~~~~~~~~~~~~~~~~~~~~^
incomplete_path=Path(blob_path + ".incomplete"),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<8 lines>...
xet_file_data=xet_file_data,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "C:\Users\jesin\AppData\Roaming\Python\Python313\site-packages\huggingface_hub\file_download.py", line 1675, in _download_to_tmp_and_move
http_get(
~~~~~~~~^
url_to_download,
^^^^^^^^^^^^^^^^
...<4 lines>...
expected_size=expected_size,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "C:\Users\jesin\AppData\Roaming\Python\Python313\site-packages\huggingface_hub\file_download.py", line 452, in http_get
temp_file.write(chunk)
~~~~~~~~~~~~~~~^^^^^^^
OSError: [Errno 28] No space left on device
The error you're encountering is due to insufficient disk space. The logs show that you need approximately 3.4GB to download the Stable Diffusion model, but you only have about 1.5GB available on your drive.
Option 1: Free up disk space. The most straightforward solution is to free up space on your drive:
# On Windows, you can use Disk Cleanup utility
# Or manually delete unnecessary files, empty recycle bin, etc.
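Before deleting anything, you can check how much space each drive actually has free, for example with Python's standard library:

import shutil

# Report free space for the drives you care about (extend the tuple as needed)
for drive in ("C:\\", "D:\\"):
    try:
        total, used, free = shutil.disk_usage(drive)
        print(f"{drive} free: {free / 1024**3:.2f} GB of {total / 1024**3:.2f} GB")
    except OSError:
        pass  # the drive does not exist on this machine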
Option 2: Move the Hugging Face cache. If you have another drive with more space, you can redirect the Hugging Face cache to it:
download_models_alt.py
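As a sketch, download_models_alt.py could point the Hugging Face cache at the larger drive before anything is downloaded, via the HF_HOME environment variable (the D:\hf_cache path is only an example):

# download_models_alt.py (sketch): cache the models on a drive with more free space
import os

# Must be set before huggingface_hub / diffusers resolve their cache location
os.environ["HF_HOME"] = r"D:\hf_cache"

import torch
from diffusers import StableDiffusionImg2ImgPipeline

StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
)

Alternatively, from_pretrained accepts a cache_dir argument if you prefer not to set environment variables.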
Option 3: Use smaller models. If disk space is a persistent issue, you can modify the project to use smaller models:
download_models_smaller.py
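As one possible sketch, the smaller script could switch to the ViT-B SAM checkpoint (roughly 375 MB instead of about 2.4 GB for ViT-H) and fetch only the fp16 weight variant of Stable Diffusion; whether this fits in your remaining space is not guaranteed:

# download_models_smaller.py (sketch): smaller SAM checkpoint plus fp16 SD weights
import urllib.request

import torch
from diffusers import StableDiffusionImg2ImgPipeline

# ViT-B checkpoint is much smaller than the ViT-H one used above
SAM_B_URL = "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth"
urllib.request.urlretrieve(SAM_B_URL, "sam_vit_b_01ec64.pth")

# Download only the half-precision weight variant (roughly half the fp32 size)
StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    variant="fp16",
)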
If you choose option 3, you'll need to update the main script to use these smaller models:
animate_gen_ai_smaller.py
Option 4: Run on Google Colab. If you continue to face disk space issues, consider running the project on Google Colab, which provides free GPU resources and sufficient disk space.
To recap the steps:
Check available space on all drives (for example with the disk-usage snippet above, or the Windows Disk Cleanup utility).
Choose the appropriate solution from the options above.
Run the modified download script:
python download_models_alt.py # or download_models_smaller.py
Run the main application:
python animate_gen_ai.py # or animate_gen_ai_smaller.py if using smaller models
Remember that the smaller models will generally produce lower quality results but require less disk space and memory.