YouTube Recap: My Favorite 9to5Mac Videos of 2023
I’d never tried the Philips Hue Lightstrip before this video, but it’s safe to say that it immediately made me a firm believer. In nearly every video since then, the Hue Lightstrip Plus, with its ever-changing color, has been present in the background.
The Lightstrip Plus, along with other Hue-enabled HomeKit-compatible smart lights, can instantly lend some much-needed spice to a drab interior space. Sure, I’ve probably overdone it with the color schemes here and there, but it’s fun to paint the walls and other surfaces with light.
Prior to this video I had no earthly idea that so many people cared about installing Windows on Mac. Almost a million views later, and it’s safe to say that the interest is very much alive and well.
I’ve done several eGPU videos over the past year, but this was by far my favorite build due to the customized fan and the small, compact form factor of the Akitio Thunder3.
iOS 11 has been out for several months now, but this was the first in-depth hands-on look at many of the exciting new features and changes that Apple introduced back at WWDC 2017.
The 10.5-inch iPad Pro is, in my opinion, the best iPad iteration ever. Its new form factor, speed, and screen enhancements make it a no-brainer for those looking to do “real work” from an iPad. Couple that with iOS 11, which introduces the biggest change ever to the iPad from a software perspective, and you have a clear winner. That said, I’m very much looking forward to a slimmer-bezel design that ditches the Home button and adopts Face ID.
I didn’t make a lot of Mac hardware videos in 2023, but felt compelled to share how easy it was to upgrade the 5K iMac’s RAM for cheap. Apple charges an inordinate price for its RAM upgrades, so you stand to save some serious coin by performing the upgrade yourself. It’s just sad that the new iMac Pro doesn’t feature user-upgradable RAM.
It’s not every day that a game-changing professional app lands on the iPad, but that’s just what happened with LumaFusion. A video editing app that includes many power-user features normally associated with desktop NLEs, LumaFusion continues to evolve into a competent mobile video editing solution, and is an outright steal at only $17.99.
When macOS High Sierra launched for developers, we were super excited to learn that Apple would officially be supporting external GPUs for the first time. Although said support is still in its early stages, the latest macOS High Sierra betas have shown considerable promise. The good news is that the coming year will bring forth even more eGPU milestones.
After using the iPhone X for a month, I’m happy with all of its new gesture-based controls, OLED screen, and overall design. It’s easy to say that the latest iPhone is the best iPhone, but in this case I really mean it; it’s clearly the best one yet. In fact, it might be my favorite new product of the year.
Video recap
2023 was a huge year for our YouTube channel, and it will only get better in the year ahead. We have lots of new goodies planned for subscribers, including recurring shows and increased interaction with viewers. If you’ve yet to subscribe, you can use this link to do so now.
How The Surface Pro 2 Became My Favorite Computer
A funny thing happened to me when I started playing around with a Surface Pro 2: It became my favorite computer.
At home in the office
Tablet form factor aside, the Surface Pro 2 is every bit a Windows 8.1 computer, so it’s not surprising that most of the time, I use it in my office, sitting at my desk. Here, the optional docking station really makes it shine: I just set the Surface Pro 2 in the docking station, slide the connectors into place, and the device is instantly connected to my network and peripherals. When hooked up to a traditional keyboard, monitor, printer, and mouse or touchpad, I find the Surface Pro 2 to be a more-than-adequate PC replacement.
Surface Pro 2 + docking station = great desktop replacement.
My workdays involve writing, communicating, and researching. So I spend the vast majority of my time in Microsoft Word, Outlook, and Internet Explorer. With most mobile devices, I’d have to use a stripped-down mobile or Web version of these applications or else find substitutes. I love how the Surface Pro 2 lets me run the full, uncompromised versions of Microsoft Office and all my other Windows software.
Generally, I prefer to use a touchpad rather than a mouse even with other operating systems and older versions of Windows. When using the Surface Pro 2 as a PC, though, I appreciate the touchpad even more because it feels like a more native and intuitive way of interacting with Windows 8.1. The touchpad provides some consistency between navigating the Surface Pro 2 as a desktop and using it as a tablet.
A truly portable PC
A few days a week, I leave the confines of my office to work at my remote annex site—better known as my local Starbucks. If I had a true desktop PC, I’d need a supplementary laptop of some sort for these off-site excursions, and I’d need SkyDrive or Google Drive, or some other system, to keep files in sync between the two. The Surface Pro 2 is the PC I can take with me.
Microsoft offers the Touch Cover and Type Cover keyboard covers as optional add-ons, but they’re really necessities. I needed some sort of protective cover for the tablet display while mobile anyway, and the keyboard covers let me use the Surface Pro 2 like an Ultrabook when I choose.
With the Type Cover attached, the Surface Pro 2 rivals an Ultrabook.
The original Surface Pro had battery life that was mediocre at best. The two or three hours’ worth of juice put it on a par with most comparable laptops but made it woefully inadequate as a PC replacement. The Surface Pro 2, however, has vastly improved hardware that lets me get through days of working in the field without needing to recharge.
After-hours entertainment
The Xbox SmartGlass app turns my Surface Pro 2 into a second screen at TV time.
I’ve also used the Surface Pro 2 with the Xbox SmartGlass app as a second screen to enhance my movie-watching experience, and I routinely read in bed using the Kindle app. I don’t recommend the Surface Pro 2 for extended reading sessions—the lower pixel density strains the eyes, and the extra weight makes it cumbersome to hold for long periods of time—but for getting in a chapter or two before drifting off to sleep, it’s adequate.
The Surface Pro 2 is ideal for reading periodicals: Since installing the Flipboard and Wired magazine apps, the Surface Pro 2 has been my go-to device for reading magazine articles while on a train or during a flight. It’s certainly much easier to read digital magazines on the Surface Pro 2 than on a laptop or smartphone.
The only device you need
You can find cheaper desktops, lighter tablets, and more powerful Ultrabooks. There are even alternative tablets that run Windows 8.1 Pro, like the Dell Venue or the Asus VivoTab. But for my purposes, none of them handle the daily transitions from PC to portable device, and from creating content to consuming it, as smoothly as the Surface Pro 2.
Why Can’t You Watch Private Videos on YouTube?
YouTube is today the second most visited website in the world, with tens of billions of visits per month. As the largest video site in the world, it hosts some 800 million videos as of 2023. However, not every video on YouTube is viewable by default. If you come across a YouTube video that displays “Video unavailable – This video is private”, it means the video has been set to private.
This guide explains why you can’t play private videos on YouTube, how to let others watch your private videos, and how to change the visibility of a video to public, unlisted, or private on YouTube.
A private video on YouTube is a video whose visibility the uploader set to “Private”. Since, by default, everyone can view the videos you upload to YouTube, privacy is a real concern. Therefore, YouTube allows users to freely set the visibility of each video they upload.
If you set an uploaded video to “private”, no one else can view it except you and those you invite. Those who do not have permission to access the video will see the “Video unavailable – This video is private” error message.
If you are not logged in to YouTube or your Google account, you will see a different message that says “Private video – sign in if you’ve been granted access to this video“.
The only way to watch a private video or stream on YouTube is to request permission from the uploader. The uploader will need to send an invite link to your Google account, and then you can view the video via the invite link.
Another way a private YouTube video can become viewable is when the uploader changes its visibility from Private to Public or Unlisted. That way, everyone has permission to access the video.
Simply put, you cannot watch a private video without the uploader’s permission. The same goes for your own videos: you can set a video as private so that no one else can view it.
To share a private YouTube video with a specific user, follow the steps below.
In the Invitees column, enter the email addresses of users you want to invite to view your private video.
You may want to enable the “Notify via email” option to notify the users you invite by email.
Once the uploader has granted you permission to access and watch the private YouTube video, you can directly open the video’s URL on YouTube and you should be able to play it.
Note that you will need to sign in to your Google account in order to view the video if that account has been granted access to the private video.
Public means everyone on the Internet can view the video, whereas Unlisted means everyone can view the video but it won’t be listed anywhere on YouTube (e.g. YouTube Home, Suggestions, Related Videos and Searches). Unlisted videos can only be discovered and opened by manually visiting the video’s URL.
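If you manage uploads programmatically, the same three visibility levels are exposed by the YouTube Data API v3 as status.privacyStatus. Here is a minimal sketch, assuming you already have authorized OAuth credentials (creds below is a placeholder) and a video ID:

from googleapiclient.discovery import build

# Build an authorized client; `creds` is a placeholder for OAuth credentials
# obtained with a YouTube scope.
youtube = build("youtube", "v3", credentials=creds)

# Set a video's visibility; valid values are "public", "unlisted", "private".
youtube.videos().update(
    part="status",
    body={"id": "VIDEO_ID", "status": {"privacyStatus": "unlisted"}},
).execute()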
Build a ChatGPT for YouTube Videos with Langchain
Introduction
Have you ever wondered how good it would be to chat with a video? As a blog person myself, I often find it tedious to watch an hour-long video just to find the relevant information. Sometimes it feels like a job to watch a video to get any useful information out of it. So, I built a chatbot that lets you chat with YouTube videos or any video. This was made possible by GPT-3.5-turbo, Langchain, ChromaDB, Whisper, and Gradio. In this article, I will do a code walk-through of building a functional chatbot for YouTube videos with Langchain.
Learning Objectives
Build the web interface using Gradio
Handle YouTube videos and extract textual data from them using Whisper
Process and format texts appropriately
Create embeddings of text data
Configure Chroma DB to store data
Initialize a Langchain conversation chain with OpenAI chatGPT, ChromaDB, and embeddings function
Finally, querying and streaming answers to the Gradio chatbot
Before getting to the coding part, let’s get familiarized with the tools and technologies we will use.
Langchain
Langchain is an open-source tool written in Python that makes Large Language Models data-aware and agentic. So, what does that even mean? Most of the commercially available LLMs, such as GPT-3.5 and GPT-4, have a limit on the data they are trained on. For example, ChatGPT can only answer questions about things it has already seen; anything after its training cutoff (September 2021 for GPT-3.5) is unknown to it. This is the core issue that Langchain solves. Be it a Word doc or any personal PDF, we can feed the data to an LLM and get a human-like response. It has wrappers for tools like vector DBs, chat models, and embedding functions, which make it easy to build an AI application using just Langchain.
Langchain also allows us to build Agents – LLM bots. These autonomous agents can be configured for multiple tasks, including data analysis, SQL querying, and even writing basic code. There are a lot of things we can automate using these agents. This is helpful, as we can outsource low-level knowledge work to an LLM, saving us time and energy.
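As a minimal sketch of what that looks like in code (API names follow the Langchain version pinned later in this article; the math question is just an example, and an OPENAI_API_KEY must be set in the environment):

from langchain.agents import initialize_agent, load_tools
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0)            # assumes OPENAI_API_KEY is set
tools = load_tools(["llm-math"], llm=llm)  # a calculator tool backed by the LLM
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent.run("What is 7 raised to the power of 0.5?")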
In this project, we will use Langchain tools to build a chat app for videos. For more information regarding Langchain, visit their official site.
Whisper
Whisper is another progeny of OpenAI. It is a general-purpose speech-to-text model that can convert audio or video into text. It is trained on a large amount of diverse audio to perform multilingual translation, speech recognition, and classification.
The model is available in five sizes – tiny, base, small, medium, and large – with speed and accuracy trade-offs. The performance of the models also depends on the language; the figure in the original article shows a WER (Word Error Rate) breakdown by language on the FLEURS dataset using the large-v2 model.
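As a rough sketch of how the size trade-off surfaces in code (model names follow the openai-whisper package we install later; the file path is a placeholder):

import whisper

# Larger checkpoints are more accurate but slower; choose from:
# tiny, base, small, medium, large
model = whisper.load_model("small")
result = model.transcribe("talk.mp4")  # placeholder path to an audio/video file
print(result["text"])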
Vector Databases
Most machine learning algorithms cannot process raw unstructured data like images, audio, video, and text. Such data has to be converted into matrices of vector embeddings, which represent it in a multi-dimensional space. To get embeddings, we need highly efficient deep-learning models capable of capturing the semantic meaning of the data. This is highly important for making any AI app. To store and query this data, we need databases capable of handling it effectively. This resulted in the creation of specialized databases called vector databases. Chroma, Milvus, Weaviate, and FAISS are some of the most popular open-source options.
Another USP of vector stores is that we can perform high-speed search operations on unstructured data. Once we get the embeddings, we can use them for clustering, searching, sorting, and classification. As the data points live in a vector space, we can calculate the distance between them to know how closely they are related. Multiple algorithms like cosine similarity, Euclidean distance, KNN, and ANN (Approximate Nearest Neighbour) are used to find similar data points.
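To make the distance idea concrete, here is a tiny, self-contained sketch of cosine similarity over toy three-dimensional embeddings (real embeddings have hundreds or thousands of dimensions):

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # 1.0 means the vectors point the same way; values near 0 mean unrelated.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

doc = np.array([0.8, 0.1, 0.3])
query = np.array([0.7, 0.2, 0.25])
print(cosine_similarity(doc, query))  # ~0.99, i.e. very similar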
We will use Chroma vector store – an open-source vector database. Chroma also has Langchain integration, which will come in very handy.
Gradio
The fourth horseman of our app, Gradio, is an open-source library for sharing machine learning models easily. It can also help build demo web apps in Python with its components and events.
If you are unfamiliar with Gradio and Langchain, read the following articles before moving ahead.
Let’s now start building it.
Setup Dev Env
To set up the development environment, create a Python virtual environment or create a local dev environment with Docker.
Now install all these dependencies:

pytube==15.0.0
gradio==3.27.0
openai==0.27.4
langchain==0.0.148
chromadb==0.3.21
tiktoken==0.3.3
openai-whisper==20230314

Import Libraries

import os
import tempfile
import whisper
import datetime as dt
import gradio as gr
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from pytube import YouTube
from typing import TYPE_CHECKING, Any, Generator, List

Create Web Interface

We will use Gradio Blocks and components to build the front end of our application. So, here’s how you can make the interface. Feel free to customize it as you see fit.
with gr.Blocks() as demo:
    with gr.Row():
        # with gr.Group():
        with gr.Column(scale=0.70):
            api_key = gr.Textbox(placeholder='Enter OpenAI API key',
                                 show_label=False, interactive=True).style(container=False)
        with gr.Column(scale=0.15):
            change_api_key = gr.Button('Change Key')
        with gr.Column(scale=0.15):
            remove_key = gr.Button('Remove Key')
    with gr.Row():
        with gr.Column():
            chatbot = gr.Chatbot(value=[]).style(height=650)
            query = gr.Textbox(placeholder='Enter query here',
                               show_label=False).style(container=False)
        with gr.Column():
            video = gr.Video(interactive=True,)
            start_video = gr.Button('Initiate Transcription')
            gr.HTML('OR')
            yt_link = gr.Textbox(placeholder='Paste a YouTube link here',
                                 show_label=False).style(container=False)
            yt_video = gr.HTML(label=True)
            start_ytvideo = gr.Button('Initiate Transcription')
            gr.HTML('Please reset the app after being done with the app to remove resources')
            reset = gr.Button('Reset App')

if __name__ == "__main__":
    demo.launch()

The interface will appear like this:
Here, we have a textbox that takes the OpenAI key as input, along with two buttons for changing the API key and removing it. We also have a chat UI on the left and a box for rendering local videos on the right. Immediately below the video box, there is a box asking for a YouTube link and buttons that say “Initiate Transcription.”
Gradio Events
Now we will define events to make the app interactive. Add the code below at the end of gr.Blocks(). The beginnings of the two click-handler chains are truncated in this copy of the article, so the trigger lines and the first chain’s inputs/outputs below are reconstructed from the component names and the surviving fragments (start1 and start2 refer to the two transcription buttons):

start_video.click(fn=lambda: pause, outputs=[start2, yt_video]).then(
    fn=embed_video, inputs=[video], outputs=[video, chatbot]).success(
    fn=lambda: resume, outputs=[start2])

start_ytvideo.click(fn=lambda: pause, outputs=[start1, video]).then(
    fn=embed_yt, inputs=[yt_link], outputs=[yt_video, chatbot]).success(
    fn=lambda: resume, outputs=[start1])

query.submit(fn=add_text, inputs=[chatbot, query], outputs=[chatbot]).success(
    fn=QuestionAnswer, inputs=[chatbot, query, yt_link, video],
    outputs=[chatbot, query])

api_key.submit(fn=set_apikey, inputs=api_key, outputs=api_key)

query.submit: responsible for streaming the response from the LLM to the chat UI.
The rest of the events are for handling the API key and resetting the app.
We have defined the events, but we haven’t yet defined the functions they call.
Backend
To keep things from getting complicated and messy, here is an outline of the processes we will handle in the backend:
Handle API keys.
Handle Uploaded video.
Transcribe videos to get texts.
Create chunks out of video texts.
Create embeddings from texts.
Store vector embeddings in the ChromaDB vector store.
Create a Conversational Retrieval chain with Langchain.
Send relevant documents to the OpenAI chat model (gpt-3.5-turbo).
Fetch the answer and stream it on chat UI.
We will be doing all of this, along with a bit of exception handling.
Define a few global variables and reusable component updates.

chat_history = []
result = None
chain = None
run_once_flag = False
call_to_load_video = 0

enable_box = gr.Textbox.update(value=None, placeholder='Upload your OpenAI API key', interactive=True)
disable_box = gr.Textbox.update(value='OpenAI API key is Set', interactive=False)
remove_box = gr.Textbox.update(value='Your API key successfully removed', interactive=False)
pause = gr.Button.update(interactive=False)
resume = gr.Button.update(interactive=True)
update_video = gr.Video.update(value=None)
update_yt = gr.HTML.update(value=None)

Handle API Keys

def set_apikey(api_key):
    os.environ['OPENAI_API_KEY'] = api_key
    return disable_box

def enable_api_box():
    return enable_box

def remove_key_box():
    os.environ['OPENAI_API_KEY'] = ''
    return remove_box

Handle Videos

Next up, we will be dealing with uploaded videos and YouTube links, with a separate function for each case. For YouTube links, we will create an iframe embed link. In both cases, we will call another function, make_chain(), responsible for creating the chain.
def embed_yt(yt_link: str):
    # This function embeds a YouTube video into the page.
    # Check if the YouTube link is valid.
    if not yt_link:
        raise gr.Error('Paste a YouTube link')
    # Set the global variable `run_once_flag` to False.
    # This is used to prevent the function from being called more than once.
    run_once_flag = False
    # Set the global variable `call_to_load_video` to 0.
    # This is used to keep track of how many times the function has been called.
    call_to_load_video = 0
    # Create a chain using the YouTube link.
    make_chain(url=yt_link)
    # Get the embed URL of the YouTube video.
    url = yt_link.replace('watch?v=', '/embed/')
    # Create the HTML code for the embedded YouTube video.
    # (The end of this string is truncated in the original; the closing tags are assumed.)
    embed_html = f"""<iframe width="750" height="315" src="{url}"
                  title="YouTube video player" frameborder="0"
                  allow="accelerometer; autoplay; clipboard-write; encrypted-media;
                  gyroscope; picture-in-picture"></iframe>"""
    # Return the HTML code and an empty list.
    return embed_html, []

def embed_video(video):  # the `def` line is missing in this copy; signature reconstructed
    # This function embeds a local video into the page.
    # Check if the video is valid.
    if not video:
        raise gr.Error('Upload a Video')
    # Set the global variable `run_once_flag` to False.
    # This is used to prevent the function from being called more than once.
    run_once_flag = False
    # Create a chain using the video.
    make_chain(video=video)
    # Return the video and an empty list.
    return video, []

Create Chain

This is one of the most important steps of all. It involves creating a Chroma vector store and the Langchain chain. We will use a Conversational Retrieval chain for our use case. We will use OpenAI embeddings, but for actual deployments, use any free embedding model, such as Huggingface sentence encoders.
def make_chain(url=None, video=None):  # the `def` line is missing in this copy; signature reconstructed
    global chain, run_once_flag

    # Check if a YouTube link or video is provided
    if not url and not video:
        raise gr.Error('Please provide a YouTube link or Upload a video')

    if not run_once_flag:
        run_once_flag = True
        # Get the title from the YouTube link or video
        # (get_title is a helper from the project repo, not shown in this article)
        title = get_title(url, video).replace(' ', '-')

        # Process the text from the video
        grouped_texts, time_list = process_text(url=url) if url else process_text(video=video)

        # Convert time_list to metadata format
        time_list = [{'source': str(t.time())} for t in time_list]

        # Create a vector store from the processed texts with metadata
        vector_stores = Chroma.from_texts(texts=grouped_texts, collection_name='test',
                                          embedding=OpenAIEmbeddings(), metadatas=time_list)

        # Create a ConversationalRetrievalChain from the vector store
        chain = ConversationalRetrievalChain.from_llm(
            ChatOpenAI(temperature=0.0),
            retriever=vector_stores.as_retriever(search_kwargs={"k": 5}),
            return_source_documents=True)

    return chain
Get texts and metadata from either YouTube URL or video file.
Create a Chroma vector store from texts and metadata.
Build a chain using OpenAI gpt-3.5-turbo and chroma vector store.
Return chain.
Process Texts
In this step, we slice the texts from the videos appropriately and create the metadata object used in the chain-building process above.
def process_text(video=None, url=None):  # the `def` line is missing in this copy; signature reconstructed
    global result, call_to_load_video  # `result` made global so repeat calls reuse the cached transcription

    if call_to_load_video == 0:
        print('yes')
        # Call the process_video function based on the given video or URL
        result = process_video(url=url) if url else process_video(video=video)
        call_to_load_video += 1

    texts, start_time_list = [], []

    # Extract text and start time from each segment in the result
    for res in result['segments']:
        start = res['start']
        text = res['text']

        start_time = dt.datetime.fromtimestamp(start)
        start_time_formatted = start_time.strftime("%H:%M:%S")

        texts.append(''.join(text))
        start_time_list.append(start_time_formatted)

    texts_with_timestamps = dict(zip(texts, start_time_list))

    # Convert the timestamp strings to datetime objects
    formatted_texts = {
        text: dt.datetime.strptime(str(timestamp), '%H:%M:%S')
        for text, timestamp in texts_with_timestamps.items()
    }

    grouped_texts = []
    current_group = ''
    time_list = [list(formatted_texts.values())[0]]
    previous_time = None
    time_difference = dt.timedelta(seconds=30)

    # Group texts based on time difference
    for text, timestamp in formatted_texts.items():
        if previous_time is None or timestamp - previous_time <= time_difference:
            current_group += text
        else:
            grouped_texts.append(current_group)
            time_list.append(timestamp)
            current_group = text
        previous_time = time_list[-1]

    # Append the last group of texts
    if current_group:
        grouped_texts.append(current_group)

    return grouped_texts, time_list
The process_text function takes either a URL or a Video path. This video is then transcribed in the process_video function, and we get the final texts.
We then get the start time of each sentence (from Whisper) and group the sentences into 30-second windows.
We finally return the grouped texts and starting time of each group.
Process Video
In this step, we transcribe video or audio files and get texts. We will use the Whisper base model for transcription.
def process_video(video=None, url=None):  # the `def` line is missing in this copy; signature reconstructed
    if url:
        file_dir = load_video(url)
    else:
        file_dir = video

    print('Transcribing Video with whisper base model')
    model = whisper.load_model("base")
    result = model.transcribe(file_dir)

    return result
For YouTube videos, as we cannot directly process them, we will have to handle them separately. We will use a library called Pytube to download the audio or video of the YouTube video. So, here’s how you can do it.
def load_video(url: str):  # the `def` line is missing in this copy; signature reconstructed
    # Create a YouTube object for the given URL.
    yt = YouTube(url)

    # Get the target directory.
    target_dir = os.path.join('/tmp', 'Youtube')

    # If the target directory does not exist, create it.
    if not os.path.exists(target_dir):
        os.mkdir(target_dir)

    # Get the audio stream of the video.
    stream = yt.streams.get_audio_only()

    # Download the audio stream to the target directory.
    stream.download(output_path=target_dir)

    # Get the path of the downloaded file.
    path = target_dir + '/' + yt.title + '.mp4'

    # Return the path of the downloaded file.
    return path
Create a YouTube object for the given URL.
Create a temporary target directory path.
Check if the path exists; otherwise, create the directory.
Download the audio of the video.
Build and return the path of the downloaded file.
This was the bottom-up process from getting texts from videos to creating the chain. Now, all that remains is configuring the chatbot.
Configure Chatbot
All we need now is to send a query and the chat history to the chain to fetch our answers. So, we will define a function that only triggers when a query is submitted.

def add_text(history, text):
    if not text:
        raise gr.Error('enter text')
    history = history + [(text, '')]
    return history

def QuestionAnswer(history, query, url, video):  # the `def` line is missing in this copy; signature reconstructed from the event wiring
    # This function answers a question using the chain built earlier.
    global chat_history  # added so the module-level chat history is updated
    # Check if a YouTube link or a local video file is provided.
    if video and url:
        # Raise an error if both a YouTube link and a local video file are provided.
        raise gr.Error('Upload a video or a YouTube link, not both')
    elif not url and not video:
        # Raise an error if no input is provided.
        raise gr.Error('Provide a YouTube link or Upload a video')

    # Get the result of querying the chain.
    result = chain({"question": query, 'chat_history': chat_history},
                   return_only_outputs=True)
    # Add the question and answer to the chat history.
    chat_history += [(query, result["answer"])]
    # For each character in the answer, append it to the last element of
    # the history, yielding as we go so the response streams to the UI.
    for char in result['answer']:
        history[-1][-1] += char
        yield history, ''

We provide the chat history with the query to keep the context of the conversation. Finally, we stream the answer back to the chatbot. And don’t forget to define the reset functionality to reset all the values.
So, this was all about it. Now, launch your application and start chatting with videos.
Real-life Use Cases
An application that lets the end user chat with any video or audio can have a wide range of use cases. Here are some real-life use cases of this chatbot:
Education: Students often go through hours-long video lectures. This chatbot can aid students in learning from lecture videos and extract useful information quickly, saving time and energy. This will significantly improve the learning experience.
Legal: Law professionals often go through lengthy legal proceedings and depositions to analyze cases, prepare documents, do research, or monitor compliance. A chatbot like this can go a long way in decluttering such tasks.
Content Summarization: This app can analyze video content and generate summarized text versions. This lets the user grasp highlights of the video without watching it entirely.
Customer Interaction: Brands can incorporate a video chatbot feature for their products or services. This can be helpful for businesses that sell products or services that are high-ticket or that require a lot of explanation.
Video Translation: We can translate the text corpus to other languages. This can facilitate cross-lingual communication, language learning, or accessibility for non-native speakers.
These are some of the potential use cases I could think of; there can be many more useful applications of a chatbot for videos.
Conclusion
So, this was all about building a functional demo web app for a chatbot for videos. We covered a lot of concepts throughout the article. Here are the key takeaways:
We learned about Langchain – a popular tool for creating AI applications with ease.
Whisper is a potent open-source speech-to-text model by OpenAI that can convert audio and video to text.
We learned how vector databases facilitate the effective storing and querying of vector embeddings.
We built a completely functional web app from scratch using Langchain, Chroma, and OpenAI models.
We also discussed potential real-life use cases of our chatbot.
That was all about it. I hope you liked it, and do consider following me on Twitter for more things related to development.
GitHub Repository: sunilkumardash9/chatgpt-for-videos. If you find this helpful, do ⭐ the repository.
2 Best Sites to Trim & Crop YouTube Videos
Have you ever wanted to send a friend or colleague a short and important snippet of a YouTube video without linking them to the entire thing? Although linking to the specific timestamp of a YouTube video is possible, it isn’t supported on every device and can be rather glitchy.
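For reference, a timestamped link simply appends a start-time parameter to the video URL (VIDEO_ID below is a placeholder; both forms start playback 90 seconds in):

https://www.youtube.com/watch?v=VIDEO_ID&t=90s
https://youtu.be/VIDEO_ID?t=90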
Along with making video clips shareable with friends, cropping and trimming YouTube videos is a tactic that many viral marketers and influencers use. You’ve probably seen instances of a Twitter user tweeting part of a video and getting thousands of retweets, right? How do they make it look so easy?
Cropping and trimming a YouTube video doesn’t take expertise with a program like Vegas Pro. All you need is a video link and to know which video trimming web service is best.
In this article, let’s go over the two best websites that will allow you to instantly trim and crop a YouTube video without downloading any special applications.
YT Cutter is our pick for the best overall web-based YouTube trimmer and downloader. It has a clean and intuitive interface and provides numerous download options.
The process is simple. First, paste the URL of the YouTube video you’d like to trim and press the Enter key or the Start button on the page. Once you’ve marked the start and end points of your clip, YT Cutter offers several download formats:
Video file: An MP4 file of your clip (with audio)
GIF animations: An animated GIF image of your clip
Audio file: An MP3 file of your clip (no video)
Screenshot: A high-resolution screenshot of the start of your clip
In rare instances after selecting a format, you may get an error stating that a rate limit set by YouTube has been exhausted. If you wait several seconds before trying again, your download should start successfully. If not, give it a bit of time and try again.
ytCropper functions a bit differently than YT Cutter, but it’s nice to have alternatives and options when it comes to trimming YouTube videos.
One immediate downside of cropping with ytCropper is that it does not support fractions of seconds, meaning your clip may be less precise. On the upside, its slider markers act as visual indicators that let you see exactly where along the play bar your cropped clip is located, which makes extending or shortening the selection very easy.
ytCropper does not offer to directly download cropped clips, and all it really does is embed the YouTube video on a page where it will start and stop at the times you’ve selected. One interesting feature it does support is looping—this is particularly useful for when you’re cropping your favorite part out of a song.
The video is shareable by the direct link provided on the page.
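Under the hood, this kind of start/stop link maps to the YouTube embed player’s start and end parameters, which take offsets in seconds (VIDEO_ID is a placeholder):

https://www.youtube.com/embed/VIDEO_ID?start=30&end=60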
While we’d love to offer more alternatives, unfortunately, a lot of the YouTube trimming services out there are flat-out broken or insufficient. For example, YouTube Trimmer has still not been updated to fix the way it uses deprecated YouTube URL parameters.
Kapwing Video Trimmer looks and feels like a great service, until you get to the part where you process your clips and realize you’ll need to sign up or deal with watermarks. It also doesn’t provide the best method of sharing your clips.
HeseTube was once a go-to solution for cutting and downloading YouTube videos, but now it’s riddled with “can’t process the video” errors. You’ll see these more often than not.
Luckily, both sites we’ve listed in this article fill particular voids: YT Cutter is great for downloading and keeping cropped YouTube videos, and ytCropper is great for linking to cropped versions of YouTube videos. We hope they can be helpful!
How to Play YouTube Videos Using the Video.js Player
In this tutorial, we’re going to learn the procedure to play YouTube videos using the video.js player. Video.js is a very popular modern web video player that supports all the latest video sources, including YouTube, Vimeo, etc.
Now, we’ll see how the video.js library can be used for playing YouTube videos using the ‘videojs-youtube’ package.
For playing YouTube videos in the video.js player, we need to install the ‘videojs-youtube’ package in our project. Installation of the package is very easy and can be done using bower or the node package manager.
Installing videojs-youtube
Use the following command for installation of ‘videojs-youtube’ using npm −

npm install videojs-youtube

If you are using bower as your package manager, then ‘videojs-youtube’ can be installed using the below command −

bower install videojs-youtube

Running the above commands in the terminal of your project will install the package, and we can start using it by importing the ‘dist/Youtube.min.js’ file. Consider the below code snippet for adding the videojs-youtube package to the project.
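The include snippet itself did not survive in this copy of the article; a typical include, assuming both files are served from local paths in your project, would look like this:

<script src="path/to/video.min.js"></script>
<script src="path/to/Youtube.min.js"></script>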
Make sure the file paths are correct, as that is very important for the package to work properly.
Now that we’ve added and imported the package in our project, let’s learn how we can actually play YouTube videos using it.
Playing YouTube Videos Using the videojs-youtube Package
For playing a YouTube video, we need to make some changes in the data-setup attribute of the video element tag.
First, we need to set the techOrder option in data-setup as ‘youtube’. Secondly, we have to pass the sources array, with the video URLs and their mime type as ‘video/youtube’ in data-setup as parameters.
Consider the below code for adding a YouTube video to the video element −
<video
   id="my-video"
   class="video-js vjs-big-play-centered vjs-default-skin"
   controls
   preload="auto"
   fluid="true"
   data-setup='{"techOrder": ["youtube"], "sources": [{ "type": "video/youtube", "src": "..." }]}'>
</video>
As you can observe, in the code snippet, we've set the tech order as YouTube and passed the sources array.
Notice that the sources tag is of array type which contains an array of JSON objects where each object has the type and URL of the video that we want to play. We can add multiple JSON objects for multiple videos. Also, make sure the type of video is 'video/youtube'.
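For instance, a sources array holding two videos (both URLs are placeholders) might look like this:

"sources": [
  { "type": "video/youtube", "src": "https://www.youtube.com/watch?v=VIDEO_ID_1" },
  { "type": "video/youtube", "src": "https://www.youtube.com/watch?v=VIDEO_ID_2" }
]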
Example 1
The complete example of including the videojs-youtube plugin and playing a YouTube video in the video player will look something like this −
<video
   id="my-video"
   class="video-js vjs-big-play-centered"
   controls
   preload="auto"
   fluid="true"
   data-setup='{"techOrder": ["youtube"], "sources": [{ "type": "video/youtube", "src": "..." }]}'>
</video>

var player = videojs('my-video');
You'll notice that our video player has the default video.js controls. If you want the YouTube controls instead, we need to pass one more option in the data-setup attribute called 'ytControls'. Since controls is already a keyword in video.js, this one is called 'ytControls'.
Example 2
Adding YouTube controls in the above example −
<video
   id="my-video"
   class="video-js vjs-big-play-centered"
   controls
   preload="auto"
   fluid="true"
   data-setup='{"techOrder": ["youtube"], "sources": [{ "type": "video/youtube", "src": "..." }], "youtube": { "ytControls": 2 }}'>
</video>

var player = videojs('my-video');
Executing the above code will display the YouTube controls on our video player instead of the default video.js controls.
Now, we've created a video player which plays YouTube videos and changed the controls to YouTube controls. You can also set additional parameters on the video player using the 'customVars' parameter.
For example, if we need to set the window mode of the player as transparent, we can do so by
data-setup='{ "techOrder": ["youtube"], "sources": [{ "type": "video/youtube", "src": "..." }], "youtube": { "ytControls": 2, "customVars": { "wmode": "transparent" } } }'
The complete working example of the YouTube video player using 'ytControls' and 'customVars' will look something like this −
Example

<video
   id="my-video"
   class="video-js vjs-big-play-centered vjs-theme-sea"
   controls
   preload="auto"
   fluid="true"
   data-setup='{ "techOrder": ["youtube"], "sources": [{ "type": "video/youtube", "src": "..." }], "youtube": { "ytControls": 2, "customVars": { "wmode": "transparent" } } }'>
</video>

var player = videojs('my-video');
Executing the above example is going to create a YouTube video player with YouTube controls and the window mode set to transparent. You can use any video.js option with the videojs-youtube package. Note that instead of data-setup, the same options can also be passed to the videojs() call directly, as shown below.
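A minimal sketch of that programmatic form (the URL is a placeholder):

var player = videojs('my-video', {
  techOrder: ['youtube'],
  sources: [{ type: 'video/youtube', src: 'https://www.youtube.com/watch?v=VIDEO_ID' }]
});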
Conclusion
In this tutorial, we understood how to play YouTube videos using video.js. First, we imported the videojs-youtube plugin, which is responsible for playing YouTube videos in our video player. Later, we learned how to display the YouTube controls instead of the default video.js controls with the assistance of an example.