Comments:
Please like and share with friends/colleagues if you find this 3.5-hour tutorial useful!
Also, I have uploaded all Fabric notebooks to my GitHub (link in the description above). To get the full learning experience, I recommend you:
1. Download the notebooks and sample datasets.
2. Import a notebook into your Fabric workspace: go to the Workspace homepage and click New > Import Notebook (at the bottom of the list).
3. Run the code in your notebook as you watch the specific tutorial, and explore extensions to the notebook using the documentation (also linked in the GitHub readme file).
Enjoy!!! 😀
Really nice video, mate. I have not watched the whole of it yet, but will do so soon in one go.
I absolutely love your videos. Please keep making them.
Fantastic video! 🤩 Please continue with this amazing work! 👌
Yes please! :-) Please make a deep-dive video on machine learning and the use of AI in Fabric! :-)
This is really interesting! Glad I found your channel 😊 Keep making more of them; your videos are easy to understand, not too basic but not too difficult, just in the perfect spot 😊 Your videos helped me with my MS Fabric learning journey and more 😊 Next time, show us how to pass the Fabric certification exams 😊
Just started using Fabric, you are a lifesaver, please do not stop making this kind of content! Great work!
Amazing. Thank you so much for this.
Love these videos, thank you
Just wanted to let you know you are doing a great job. Hopefully you will get recognized properly.
When I do df.printSchema(), I get an error stating "'DataFrame' object has no attribute 'printSchema'". Can you help with this?
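[Editor's note] A hedged guess at the cause, not confirmed by the comment: printSchema() exists only on Spark DataFrames, and a pandas DataFrame raises exactly this AttributeError. So df was probably created with pandas (e.g. pd.read_csv) or converted via .toPandas(). A minimal sketch of the pandas side, with the pandas equivalent for inspecting the schema:

```python
import pandas as pd

# Hypothetical pandas DataFrame standing in for the commenter's df
df = pd.DataFrame({"name": ["Jack", "Matthew"], "id": [90000, 45400]})

print(hasattr(df, "printSchema"))  # False: printSchema is a Spark method
print(df.dtypes)                   # pandas equivalent of the schema

# In a Fabric notebook, converting back to Spark restores printSchema:
#   sdf = spark.createDataFrame(df)
#   sdf.printSchema()
```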
Great job on this series! Enjoyed watching it! Keep up the great work!!!
Can we create and execute a stored procedure in a notebook?
Thank you Will Needham, awesome content. Could you tell me where the learning materials for your tutorial are now? I didn't find them on GitHub.
ОтветитьYou stopped your consulting job to focus 100% on teaching Fabric?
Wow, you're convinced about Fabric. Glad to hear!
Completed through day 7; everything works well. At my age (62) I have given up trying to memorize coding syntax, and downloading your "learning" notebooks allows me to build a library of coding syntax. This allows me to spend my time learning "what" I need to do, then using the library examples to figure out the "how."
What a FANTASTIC vid... really enjoying your series, please keep them coming, as you're putting out much higher quality content than anyone else ⭐
So do I need to learn Python before I start with Spark, or could I get by with just SQL?
Awesome tutorial - thanks!
Good stuff
Great video
I am a new member, and I have spent less than 30 days working with the Fabric environment; I am on my third project. Yesterday I started re-listening to all 30 of your Fabric videos and completed them today. I was particularly looking for an answer on how to properly code this script to extract a table and write it to an external CSV file.
AttributeError Traceback (most recent call last)
Cell In[17], line 177
173 spark = SparkSession.builder.getOrCreate()
175 df=spark.sql("""select * from SrvySmryWithContract""")\
176 .show(n=10)
--> 177 df.coalesce(1).write.format("csv")\
178 .options(header='True', delimiter=',')\
179 .mode('overwrite')\
180 .save(path='Users/DWELLS/Downloads/SrvySmryWithContract.csv')
AttributeError: 'NoneType' object has no attribute 'coalesce'
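[Editor's note] A likely diagnosis, offered as a sketch rather than a verified fix: DataFrame.show() prints rows and returns None, so the chained .coalesce(1) runs on None, which produces exactly this AttributeError. A Spark-free illustration of the pitfall, with a corrected pattern in comments; the Files/... save path is an assumption, since a local Users/.../Downloads path is not writable from a Fabric notebook:

```python
# show(), like print(), is called for its side effect and returns None
class FakeDataFrame:
    def show(self, n=10):
        print(f"(displaying first {n} rows)")
        # no return statement, so the caller gets None, just like Spark's show()

df = FakeDataFrame().show(n=10)  # df is now None, not a DataFrame
assert df is None                # hence: 'NoneType' object has no attribute 'coalesce'

# Corrected pattern: keep the DataFrame, call show() separately, then write.
#   df = spark.sql("select * from SrvySmryWithContract")
#   df.show(n=10)
#   df.coalesce(1).write.format("csv") \
#       .options(header="True", delimiter=",") \
#       .mode("overwrite") \
#       .save("Files/SrvySmryWithContract")  # assumed lakehouse path
```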
That’s great! How about deploying Spark jobs with Microsoft Fabric?
Hi, my name is Antonio. I'm from Mexico, and I really enjoyed your course. I'd like to know if you have more of these courses and whether they cost anything extra. The video is excellent. Congratulations! I hope you continue with this. I learned it in three days because of the urgency.
I don't get the benefits of replacing nulls with the average or mean value. Imagine I want to predict sales price based on city, address, etc. Won't replacing nulls with the mean value impact the prediction accuracy? What if we keep the nulls in the Sales Price column? Will that impact the calculation of the average or max sales price?
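[Editor's note] The second half of the question can be checked directly with pandas (a sketch; the SalesPrice values below are made up): pandas skips nulls when computing mean() and max() by default, so keeping the nulls does not distort those aggregates. Whether mean imputation hurts prediction accuracy is a separate modeling question; it preserves the column mean but shrinks its variance.

```python
import numpy as np
import pandas as pd

# Hypothetical sales prices with one missing value
prices = pd.Series([100.0, 200.0, np.nan, 300.0], name="SalesPrice")

print(prices.mean())  # 200.0: NaN is skipped (skipna=True is the default)
print(prices.max())   # 300.0: also unaffected by the NaN

# Mean imputation fills the gap without changing the mean itself
filled = prices.fillna(prices.mean())
print(filled.mean())  # still 200.0, but the column's variance has shrunk
```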
What's the difference between
data2 = [("Jack", 90000), ("Matthew", 45400)]
df = spark.createDataFrame(data2, ["name", "id"])
vs
data2 = [["Jack", 90000], ["Matthew", 45400]]
df = spark.createDataFrame(data2, ["name", "id"]) ? I remember that in Python, the first case uses tuples, meaning the values cannot be changed once created. Does that mean these columns can't be changed once the DataFrame is created?
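[Editor's note] A sketch of an answer (plain Python; no Spark needed for the first part): the tuple-vs-list distinction only affects the Python objects before Spark copies them. Either shape is accepted by createDataFrame, and the resulting Spark DataFrame is immutable in both cases; column changes always produce a new DataFrame, e.g. via withColumn.

```python
rows_as_tuples = [("Jack", 90000), ("Matthew", 45400)]
rows_as_lists = [["Jack", 90000], ["Matthew", 45400]]

rows_as_lists[0][1] = 95000       # a list row can be changed in place
try:
    rows_as_tuples[0][1] = 95000  # a tuple row cannot
except TypeError as e:
    print(e)  # 'tuple' object does not support item assignment

# Spark copies the values either way, so the source rows no longer matter:
#   df = spark.createDataFrame(rows_as_tuples, ["name", "id"])
#   df2 = df.withColumn("id", df["id"] + 1)  # new DataFrame; df is unchanged
```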
Hello, and thank you for the content! I have a question: I have a pipeline which contains 5 notebooks. My total running time is about 15 minutes, and 9 of those 15 minutes are spent starting the session. Which steps should I take to fix this issue?