Polars: The Next Big Python Data Science Library... written in RUST?

Rob Mulla

2 years ago

182,764 views

Comments:

@grppc1 - 08.05.2023 18:02

ana mafdtinich

@footkol - 25.05.2023 22:21

Thank you for the informative video. May I ask what software you are using in it? Is it JupyterLab?

@theTenorDrummer - 01.06.2023 21:09

Hey, I'm learning Python and want to eventually be able to analyze a drummer's rhythmic timing vs. a "perfect" performance. Definitely stealing a few nuggets from this. Thanks! Anyone out there want to help me out???

@durgaganesh423 - 19.06.2023 20:22

Hi
Can you help with finding glitches or audio abnormalities in a WAV file?
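For simple glitch checks, two things are easy to flag directly from the waveform: clipped samples (values pinned near full scale) and sudden sample-to-sample jumps. A minimal sketch, with a placeholder path and thresholds chosen as rough example values:

    import numpy as np
    import librosa

    y, sr = librosa.load('recording.wav', sr=None)        # placeholder path, keep native rate

    clipped = np.where(np.abs(y) >= 0.999)[0]             # samples pinned at full scale
    jumps = np.where(np.abs(np.diff(y)) > 0.5)[0]         # abrupt discontinuities (clicks/pops)

    print('clipped samples at seconds:', clipped / sr)
    print('suspicious jumps at seconds:', jumps / sr)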

@nixboaski - 07.07.2023 01:20

This is so interesting.

A few days ago I wanted to produce a digital reproduction of a particular musical note, using the note as the basis along with its harmonics (I was analysing A = 440 Hz, but I wrote the script in such a way that I could alter that). So I had basically two aspects to take into account: the frequencies and their amplitudes.

I recorded a note from the piano, cleaned it of noise as much as I could and extracted the amplitudes from it for each frequency that forms an A note. It was terrible! The final result sounded ghastly.

Your video will help me understand how I must proceed to make a digital sound that makes more sense. I totally would like to learn how to use machine learning on audio processing too.
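For the synthesis side, a minimal sketch of additive synthesis in Python, assuming NumPy and the soundfile package; the fundamental, harmonic amplitudes, and decay are example values rather than measured ones:

    import numpy as np
    import soundfile as sf

    sr = 44100
    duration = 2.0
    f0 = 440.0                                     # fundamental (A4), easy to change
    amps = [1.0, 0.5, 0.25, 0.12, 0.06]            # example amplitudes for harmonics 1..5

    t = np.linspace(0, duration, int(sr * duration), endpoint=False)
    y = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t) for k, a in enumerate(amps))
    y *= np.exp(-3 * t)                            # simple exponential decay envelope
    y /= np.abs(y).max()                           # normalise to avoid clipping

    sf.write('a440_additive.wav', y.astype(np.float32), sr)

Real piano notes also have slightly stretched (inharmonic) partials and amplitudes that evolve over time, which is a big part of why a plain harmonic sum tends to sound artificial.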

@dannybee9068 - 07.07.2023 13:23

Hello! Thank you for the excellent video! I have a question though: what is the difference in use cases between the STFT and the mel spectrogram? Both methods appear to extract features for the model, but in distinct ways. I am interested in understanding when one is more advantageous than the other. For sentiment analysis, for example, I think the mel spectrogram seems more appropriate, but that is nothing more than a guess with a bit of intuition; it feels like for speech it is better to use the mel spectrogram, and for any other sound the STFT.
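A minimal sketch of the two representations in librosa, assuming a mono signal y at sample rate sr (the file path is just a placeholder):

    import numpy as np
    import librosa

    y, sr = librosa.load('example.wav')          # placeholder path; mono, sr defaults to 22050 Hz

    # STFT: complex spectrogram on a linear frequency axis (1 + n_fft/2 bins)
    D = librosa.stft(y, n_fft=2048, hop_length=512)
    S_db = librosa.amplitude_to_db(np.abs(D), ref=np.max)

    # Mel spectrogram: power spectrogram pooled onto a perceptual (mel) frequency axis
    S_mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048, hop_length=512, n_mels=128)
    S_mel_db = librosa.power_to_db(S_mel, ref=np.max)

    print(S_db.shape)      # (1025, frames) - fine linear frequency resolution
    print(S_mel_db.shape)  # (128, frames)  - compact, perceptually spaced bands

The mel spectrogram is just the STFT power spectrogram grouped into mel-spaced bands, so it is smaller and closer to how humans hear pitch, which is why it is a common default for speech and general sound classification; the raw STFT keeps full linear frequency detail when that matters.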

@AnimeSyncInfinite - 17.08.2023 17:18

I want to mimic another person's voice with my voice. In short, I will give a small audio sample as input (for example, my voice), and the code will extract the various characteristics of my voice so that I can manipulate it with audio of some other person's voice. Is it possible to do this in Python?

@danieldanielineto7228 - 01.09.2023 19:40

Great job, thank you.

@Zizos - 20.09.2023 04:22

As a "I understand what's going on but not a coder" I understand that it would take me months if not years to create what I want.
How hard would it be to create a audio visualizer plugin? Like make a plugin for a video editor that takes a audio track, analyzes frequencies with custom ranges and drives parameters based on loudness of the frequency ranges you've set up?
I'd have to learn how to manage data, memory, incorporate into video editor, libraries, compiling and who knows what else... ah yes, more than basic coding.
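Setting the plugin and editor integration aside, the core analysis (loudness per custom frequency range over time) is a short computation. A minimal sketch with librosa and NumPy, where the band edges are just example values:

    import numpy as np
    import librosa

    y, sr = librosa.load('track.wav')                           # placeholder path
    n_fft, hop = 2048, 512
    S = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop))    # magnitude spectrogram
    freqs = librosa.fft_frequencies(sr=sr, n_fft=n_fft)         # frequency of each STFT row

    bands = {'low': (20, 200), 'mid': (200, 2000), 'high': (2000, 8000)}  # example ranges
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs < hi)
        loudness = S[mask].mean(axis=0)          # one value per frame; could drive a parameter
        print(name, loudness.shape, float(loudness.max()))

Each loudness array has one value per STFT frame (about sr/hop, roughly 43 values per second here), which is the kind of time series you would map onto an effect parameter.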

@mudasserahmad6076 - 24.09.2023 13:36

Hi Rob, interesting video. My task is to create mel spectrograms with window lengths of 93 ms, 46 ms, and 23 ms, and then combine them into one. I am confused by a shape like (128, 216, 3): what does the 3 represent here? 128 is n_mels and 216 is the number of frames.
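The 3 in a shape like (128, 216, 3) is usually the channel axis: three mel spectrograms, one per window length, stacked like the RGB channels of an image. A minimal sketch, assuming the same hop length for all three so the frame counts match (the file path and hop value are placeholders):

    import numpy as np
    import librosa

    y, sr = librosa.load('clip.wav')                 # placeholder path
    hop = 512                                        # shared hop so all three have the same width

    channels = []
    for win_ms in (93, 46, 23):
        win = int(sr * win_ms / 1000)                # window length in samples
        S = librosa.feature.melspectrogram(
            y=y, sr=sr, n_fft=4096, win_length=win, hop_length=hop, n_mels=128)
        channels.append(librosa.power_to_db(S, ref=np.max))

    X = np.stack(channels, axis=-1)                  # shape: (128 mel bands, frames, 3 channels)
    print(X.shape)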

@googul2041 - 09.10.2023 15:57

Glob, librosa, wavered — without these, why doesn't DSP audio work at all? There's librosa for that.

@LucaMatts - 10.10.2023 02:00

What are the y values that you first extract?
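For context, the first thing librosa.load returns is the raw waveform: y is a 1-D NumPy array of amplitude samples (floats, typically in [-1, 1]) and sr is how many of those samples represent one second. A minimal sketch with a placeholder path:

    import librosa

    y, sr = librosa.load('example.wav')      # placeholder path
    print(type(y), y.shape, y.dtype)         # 1-D float32 array of amplitudes
    print(sr, len(y) / sr)                   # sample rate and duration in seconds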

@shashankgsharma0901 - 16.10.2023 05:20

you're not working in jupyter?

@shaimaalbalushi1739 - 21.10.2023 16:33

What are you using as an editor to write the code?

@amuigayle2231 - 06.11.2023 23:49

I'll probably never get a reply to this, but is it an either/or choice between the STFT and the mel spectrogram? Why didn't you create the mel spec from the transformed data?
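It isn't either/or: a mel spectrogram is derived from an STFT, and librosa will build one from a spectrogram you have already computed if you pass it as S. A minimal sketch, with a placeholder path:

    import numpy as np
    import librosa

    y, sr = librosa.load('example.wav')          # placeholder path
    D = librosa.stft(y)                          # complex STFT
    S_power = np.abs(D) ** 2                     # power spectrogram

    # Reuse the existing STFT instead of recomputing it from y
    S_mel = librosa.feature.melspectrogram(S=S_power, sr=sr, n_mels=128)
    S_mel_db = librosa.power_to_db(S_mel, ref=np.max)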

@omart9411 - 05.12.2023 14:19

It's a shame that this is such a low-level tutorial, yet you assume that I'm already familiar with the meaning of the terminology.

@sphyrnidae6749 - 13.12.2023 19:18

Hi Rob! Thank you for your videos. You inspired me to start digging deep into data science. I read a lot of books, watched almost all your videos, and did some courses on Coursera.
Do you have any recommendations on how to train on real data now?
I do some work now with data from fields I'm interested in, but I think it would be great to have a community or, at least at the very beginning, some kind of guided projects. I discovered data scratch. Do you recommend something like this?

@sporttyt - 24.12.2023 21:18

Can you create a product for me?

@pywidem5823 - 04.01.2024 00:08

Working in both audio and IT, the sample rates displayed for your files feel like they're halved. To represent a frequency accurately you need a sample rate of at least 2x that frequency, so it would normally be 44.1 kHz (which is much more common; I have never seen the option to record at 22050 Hz). With 22050 Hz you have data representing only up to roughly 10 kHz once you account for the Nyquist limit and the anti-aliasing filter.
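The halving most likely comes from librosa rather than the recordings: librosa.load resamples to 22050 Hz by default unless you ask it not to. A minimal sketch with a placeholder path:

    import librosa

    y, sr = librosa.load('example.wav')              # default: resampled to sr = 22050
    print(sr, sr / 2)                                # Nyquist limit: 11025 Hz representable

    y_native, sr_native = librosa.load('example.wav', sr=None)   # keep the file's own rate
    print(sr_native, sr_native / 2)                  # e.g. 44100 -> content up to 22050 Hz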

@pycsr-by-pankajsoni - 09.01.2024 16:08

Very nicely explained!!🙂

@RKYT0 - 02.02.2024 19:20

If you want to increase the resolution on the x axis you can increase the sr. But how do you increase the resolution of the frequency on the y axis?

Edit: It seems quite hard to use this code to shift the frequencies, as the frequencies are encoded in the indexing of the dB matrix... that was my actual aim, because other software seemed to compress a lot of data, which mostly seems to come down to the mel or log scale... I think. If you want to simply shift 1000 Hz to 100 Hz you lose a lot of frequencies, which could be compensated with higher y-resolution... but I guess there are more clever methods?
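On the y-axis question: the frequency resolution of an STFT is sr / n_fft, so the usual way to get finer frequency bins is to raise n_fft (at the cost of coarser time resolution). A minimal sketch, with a placeholder path:

    import librosa

    y, sr = librosa.load('example.wav')              # placeholder path

    for n_fft in (2048, 8192):
        D = librosa.stft(y, n_fft=n_fft, hop_length=512)
        print(n_fft, D.shape[0], 'bins,', sr / n_fft, 'Hz per bin')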

@yusufcan1304 - 23.02.2024 10:58

thanks man

@Lokesh-dt4mp - 24.02.2024 04:58

Hey Rob, good to see you. But one thing: the background music is disgusting; kindly drop it from upcoming videos.

@riittap9121 - 11.03.2024 21:52

This video would be very helpful, but the distracting background music makes it really difficult to follow the content 🙁

@sarthakkumar8696 - 29.03.2024 21:32

I have no words to express how helpful this was!!! Really, thank you.

@antony830 - 08.04.2024 03:13

Thanks Rob. How do I upload files so that this works: audio_files = glob('../input/ravdess-emotional-speech-audio/*/*.wav')? Are you using a website or software in this video to run Python? I have just started learning Python.
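That '../input/...' pattern looks like a Kaggle notebook path; the same glob call works locally if you point it at wherever the extracted RAVDESS folders live on your machine. A minimal sketch, with a placeholder local folder:

    from glob import glob

    # Placeholder: adjust to the folder where you extracted the dataset
    audio_files = glob('ravdess-emotional-speech-audio/*/*.wav')
    print(len(audio_files), 'files found')
    print(audio_files[:3])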

@pfrmusic_remix - 19.04.2024 13:29

Awesome!!! Can I feed a CNN with mel spectrograms?

@MohamedElsayed-c2u - 10.05.2024 19:41

Can I make this project using VS Code?

@franciscomolano5202 - 19.05.2024 03:37

How do I set up real-time data, from OpenCV or in Python, using a USB Bluetooth adapter, for a spectrogram?
I can't find a way to get an answer.

@narangfamily7668 - 03.06.2024 07:24

Super helpful!

@emanaboubakr8708 - 08.06.2024 20:30

I need help with installing the librosa library in Python.
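librosa is a regular PyPI package, so installing it is normally a single pip command; the snippet below just verifies the install afterwards (assumes pip and Python are already on your PATH):

    # In a terminal:  pip install librosa
    import librosa
    print(librosa.__version__)      # prints the installed version if the install worked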

@BrianCarter - 10.06.2024 16:01

I can’t listen to this because of your background music.

@preethipydipogu3892 - 10.06.2024 18:44

Thank You so much sir this video is very helpful.

@SA-oj3bo - 01.07.2024 16:16

Hi Rob, do you know how I can do similar things like YOLO does, but for audio? I am looking for a fast solution that tells me what sounds are recognized in a live audio-stream. Thanks in advance!
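There isn't a drop-in YOLO-for-audio in librosa itself, but the usual pattern is: grab short chunks from the microphone, turn each chunk into features (for example a mel spectrogram), and hand them to a sound-event classifier you have trained or downloaded. A minimal sketch of the streaming part, assuming the sounddevice package and a hypothetical classify_clip() function that you would supply yourself:

    import numpy as np
    import sounddevice as sd
    import librosa

    SR = 22050
    CHUNK_SECONDS = 1.0

    def classify_clip(mel_db):
        # Hypothetical: replace with your own trained model's prediction
        return 'unknown'

    def callback(indata, frames, time, status):
        y = indata[:, 0]                                         # mono float32 chunk
        mel = librosa.feature.melspectrogram(y=y, sr=SR, n_mels=64)
        mel_db = librosa.power_to_db(mel, ref=np.max)
        print(classify_clip(mel_db))

    with sd.InputStream(samplerate=SR, channels=1,
                        blocksize=int(SR * CHUNK_SECONDS),
                        callback=callback):
        sd.sleep(10_000)                                         # listen for 10 seconds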

@electron46 - 02.08.2024 18:25

Your videos are excellent and I really appreciate them. I'm still trying to figure out why you feel the need to add the annoying music that interferes with your discussion.

@cgyh68748 - 10.08.2024 16:05

Based and redpilled

@chome4 - 21.08.2024 20:03

Can you analyze audio to see if it's been edited?

@carlo3252 - 05.10.2024 09:28

can someone help me out with some scientific papers on the topic?

@shanecorning5222 - 21.10.2024 05:56

This is SO cool!!!!!! HOW freeking cool is THIS , sh$%t , right guys? .. .. .. :-D

@000Andre00 - 21.10.2024 21:36

I'm currently doing a GAN project to generate audio; this was really helpful, thank you!!

@JuliaNasca - 18.11.2024 15:31

Hi, I am a student from Sweden working on my examination project, which is to perform an FFT on an audio file and then make a 3D model using the results from said FFT. I was wondering if you had any pointers?

Thank you in advance.
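A minimal sketch of the FFT step with NumPy, assuming a mono file at a placeholder path; the output pairs each frequency with its magnitude, which is the kind of data you could feed into a 3D model:

    import numpy as np
    import librosa

    y, sr = librosa.load('example.wav')          # placeholder path, mono
    spectrum = np.fft.rfft(y)                    # FFT of the whole signal
    freqs = np.fft.rfftfreq(len(y), d=1 / sr)    # frequency (Hz) of each FFT bin
    magnitudes = np.abs(spectrum)

    # For a surface that evolves over time, an STFT gives a (frequency x time) grid instead:
    D = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))
    print(magnitudes.shape, D.shape)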

@Brentimus - 24.11.2024 04:00

Ok so my daughter signed herself up for a science fair project where she wants to build a device to data log gunshots in different areas of restricted government land for managing poachers. We’d have to find a way to write a program to distinguish gunshots from other sounds. Can this be done in python?
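Yes, this is a standard audio-classification task in Python. One hedged sketch of the approach: collect short labelled clips (gunshot vs. everything else), summarise each clip with MFCC features, and train a small classifier; the folder names and label scheme below are placeholders:

    import numpy as np
    import librosa
    from glob import glob
    from sklearn.ensemble import RandomForestClassifier

    def clip_features(path):
        y, sr = librosa.load(path, duration=2.0)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
        return mfcc.mean(axis=1)                     # one 20-dim vector per clip

    # Placeholder folders of labelled training clips
    gunshots = glob('data/gunshot/*.wav')
    other = glob('data/other/*.wav')

    X = np.array([clip_features(p) for p in gunshots + other])
    y_labels = np.array([1] * len(gunshots) + [0] * len(other))

    clf = RandomForestClassifier(n_estimators=200).fit(X, y_labels)
    print(clf.predict([clip_features('data/unknown_clip.wav')]))   # 1 = gunshot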

@JoyBatjargalGanbat - 10.01.2025 18:44

It's really helpful.
Can I use Jupyter Notebook for audio data to follow your tutorial?

@davidmam - 26.02.2025 17:59

That music in the background is really annoying.

@sebastianscharnagl3173 - 08.03.2025 18:53

Lol, I always get load and read mixed up too! Especially in PD 😄

@cryptasen9263 - 16.03.2025 08:38

I feel so stupid after watching this entire video.

@animatedkidsstories-92 - 21.04.2025 13:30

Can you tell me how I can handle audio in a live-chat audio application, where the frontend sends audio data continuously? How can I handle it in a FastAPI backend?
I am using VAD but it still picks up too much background noise.

@leejanet1841 - 19.05.2025 11:06

Could you upload ML projects related to audio data, like what Spotify might use, in the future? You are like the only one I can learn from. Thank you so much for this video!!
