Comments:
Thanks, but how do we handle PDF files larger than 200 MB?
Bro, you are providing so much value, and the download link makes it so easy to use.
THANKS. Crazy value provided to the world. I hope you sleep well knowing that. Keep it up!
Can this be done on your locally self-hosted n8n AI agent setup?
Yes please, I certainly want a locally hosted version 🤗
Hi Cole, I just converted this entire workflow to local RAG, without Supabase. It works with the Postgres PGVector Store instead; I also had to change the AI Agent prompt and enable "create extension vector" in Postgres via its init.sql. I discovered a dangling bug with the Ollama Chat Model node (under AI Agent) which gives an error: "Non-string content not supported" even when the input string is not empty. Have you seen that problem? Weird. A workaround was to use the AI Chat Model instead while I troubleshoot that error. It all works great, though. Thanks much for your videos.
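For anyone attempting the same conversion, here is a minimal sketch of what that init.sql needs, assuming a stock Postgres container image that ships with the pgvector extension:

```sql
-- init.sql: runs once when the Postgres container first initializes.
-- Enable pgvector so the Postgres PGVector Store node can create and
-- query embedding columns.
CREATE EXTENSION IF NOT EXISTS vector;
```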
I do not have the "Connect" button on the self-hosted version of Supabase, but so far it's working fine. I just keep having trouble with the "Default Data Loader"; I hope I will figure it out soon.
Other question: is "unstract" from your ad the go-to extension for this workflow? The additional 8 GB is currently holding me back from trying it out :)
Yes, interested in a local setup! Meanwhile, this is great work, thanks Cole.
Cole, congratulations on the amazing video, excellent, thank you!
Thanks. I will try to build a local one.
Learning to code is easier than learning n8n workflows. Why are people promoting n8n, which is totally paid, and not choosing the free one, Make? Make a video please.
Love this channel! ☺ Are you aware of software that embeds audio, video, and/or 3D meshes?
Thank you, it is really good. But it keeps hallucinating after a few questions. I've tried several prompts to force it to only use the source, but sometimes it wants to add information that is nowhere in the documents... Let me know if you have a way to fix this issue. Maybe add another LLM to analyse the first answer and compare it with the source (but that will take time and tokens...)
Thanks again, let's go deeper :)
Thank you again, awesome! Looking forward to the offline version, and I have learned a lot from the comments... so a big thank you to everyone! This is the n8n GOAT channel. Chunking is my current pain point.
Super impressive and useful system, thanks Cole :)
🙏🙏🙏
Hi Cole, wow - this is awesome value!!!
So many people are hiding valuable how-to's and details behind paid subscriptions. I pray you may continue to bless us with more quality content like this!
If you use a JSON or XML extractor, would you create a schema similar to what you did for CSV and Excel? Thanks!
Great stuff, I loved it! I was looking for exactly that. I tried to use your project in my n8n. It works fine when I use a chat message, but it does not work with a webhook (EvolutionAPI). It returns an error on the Postgres chat memory: "input values have 3 keys. You must specify an input key or pass only 1 key as input." Any ideas on what went wrong? {
"nodes": [
{
"parameters": {
"promptType": "define",
"text": "={{ $json.chatInput }}",
"options": {
"systemMessage": "You are a personal assistant who helps answer questions from a corpus of documents. The documents are either text based (Txt, docs, extracted PDFs, etc.) or tabular data (CSVs or Excel documents).\n\nYou are given tools to perform RAG in the 'documents' table, look up the documents available in your knowledge base in the 'document_metadata' table, extract all the text from a given document, and query the tabular files with SQL in the 'document_rows' table.\n\nAlways start by performing RAG unless the question requires a SQL query for tabular data (fetching a sum, finding a max, something a RAG lookup would be unreliable for). If RAG doesn't help, then look at the documents that are available to you, find a few that you think would contain the answer, and then analyze those.\n\nAlways tell the user if you didn't find the answer. Don't make something up just to please them."
}
},
"id": "e3292840-8028-4aaf-bea5-7aae1f5bb69b",
"name": "RAG AI Agent",
"type": "@n8n/n8n-nodes-langchain.agent",
"typeVersion": 1.6,
"position": [
-520,
140
]
}
],
"connections": {
"RAG AI Agent": {
"main": [
[]
]
}
},
"pinData": {},
"meta": {
"templateCredsSetupCompleted": true,
"instanceId": "285a2285000b16ddbe78c2cc327e8ef7e0579a5c287cc66b13bac6e8f31beb9e"
}
}
Power users like me, who fly through LLM prompts and docs all day, should know what our cloud bills would be for running n8n on something like this. This is why I love using my local LLM for things like this: we don't have to sit there and burn money all day. I would love a local LLM and n8n version of this. Great work, though!
Dear Cole, you outdo yourself with every piece of content... congratulations!
Did you write the SQL for creating the metadata tables yourself, or did you paste code from some documentation? If you pasted it, which one?
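For reference, a minimal sketch of what such metadata SQL could look like, inferred from the table names in the agent's system prompt ('document_metadata' and 'document_rows'); the column names here are illustrative assumptions, not the exact schema from the video:

```sql
-- Hypothetical schema sketch; column names are assumptions.
CREATE TABLE IF NOT EXISTS document_metadata (
    id     TEXT PRIMARY KEY,  -- e.g. the Google Drive file ID
    title  TEXT,
    url    TEXT,
    schema TEXT               -- column list for tabular files (CSV/Excel)
);

CREATE TABLE IF NOT EXISTS document_rows (
    id         SERIAL PRIMARY KEY,
    dataset_id TEXT REFERENCES document_metadata(id),
    row_data   JSONB           -- one spreadsheet row stored as JSON
);
```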
I'm writing from Brazil, watching videos in another language, because in Brazil there are only con artists trying to sell courses without teaching anything useful.
Excellent RAG system. I have started implementing one recently and faced the shortfalls you have beautifully addressed with this agent. Thanks for sharing!
Do you think we can implement Mem0-like memory that remembers user preferences, adapts to individual needs, and continuously improves over time?
Where can I find the template workflow?
I get an error when I run it: Problem in node 'Extract from CSV'.
"This operation expects the node's input data to contain a binary file 'data', but none was found [item 0]" - even though I include a CSV file in the Google Drive.
Still waiting for a template on Anoma. I know it should be easy to build on because they’ve shifted from traditional transaction-based logic to a more powerful intent-based system, simplifying app development.
Thank you so much for this, it is awesome. Do you have to set up the Excel documents in a specific way? When I parse an Excel document it changes my date to some weird number, and then my dynamic query doesn't work because it filters on the date. It does not do the same for CSV files, though they use a different "Extract from" node. I have also encountered this issue when extracting complex PDFs; the "Extract" node just does not get the parsing right. This workflow is very well thought out and great to learn from.
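That "weird number" is almost certainly an Excel serial date: Excel stores dates as a count of days since 1899-12-30, so a cell holding 2024-01-01 is parsed as 45292. A hedged Postgres-side conversion, assuming the serial lands as text in a row_data JSONB column (table and key names are illustrative):

```sql
-- Convert an Excel serial date back into a real date.
-- Excel's epoch is 1899-12-30 (the offset accounts for Excel's
-- historical 1900 leap-year quirk).
SELECT DATE '1899-12-30' + (row_data->>'date')::int AS real_date
FROM document_rows;
-- Example: serial 45292 -> 2024-01-01
```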
You are awesome as always, Cole!!
Could you share how this kind of setup would work using Airtable as the source for RAG? I can't seem to find any good video on it.
This was similar to the solution we landed on. If we are using Postgres for all our structured data, and pgvector on Supabase for our vectors, then the hybrid approach gives us so much flexibility. You can marry plain SQL table columns with vector embedding queries, much as shown here. Yes, it is hard to have the perfect function ready for every query, but if your agent knows your database schema well, it can write a query for any need you have: "meetings with this participant", "agents for this account", and date-based searches are obviously huge. Even the dates I make entries on are constantly helping me narrow my searches. Thanks Cole!
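For readers new to this pattern, a sketch of such a hybrid query, assuming a pgvector-backed documents table with an embedding column and a metadata JSONB column (all names and the inline query vector are illustrative assumptions):

```sql
-- Combine plain SQL filters with a vector similarity search.
SELECT content
FROM documents
WHERE metadata->>'participant' = 'Alice'                     -- structured filter
  AND (metadata->>'created_at')::date >= DATE '2024-01-01'   -- date filter
ORDER BY embedding <=> '[0.1, 0.2, 0.3]'::vector             -- pgvector cosine distance
LIMIT 5;                                                     -- top-5 nearest chunks
```

In practice the query vector is a parameter computed by embedding the user's question, not a literal.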
Thank you for this update. Yes, I'm interested in running all of these Docker containers and services locally first, with the intention of running it in the cloud in the future.
Does it work with self-hosted n8n?
Hey Cole, do you edit your own videos or outsource? Nice editing.
Yes, please do a local version. Thank you!
top top top
Well, I uploaded a PDF document, ran the workflow, and it completed successfully. However, there was nothing in the documents folder. I'm running the cloud versions of both n8n and Supabase. Any thoughts?
Awesome content, Cole! Just baffled that you are giving this out for free! Thanks so much! Would love to see the local version as well, and if you could also include some or all of the suggestions made by @milutinke, that would be just superb! Just one more question: is there a Buy Me a Coffee button somewhere to sponsor your amazing work?
Very excited to dive into this one!
Thank you very much, and I'm looking forward to the local version.
Great thing - thanks. I am also very interested in the local version.
Cole! Can't wait to see the complete local version. Thank you
Awesome, can't wait to see the complete local version.
Thanks!!!!
Hello, do you offer services to install this?
Thanks Cole for pointing out the problems with RAG. Would this workflow work for Markdown files?
Cool feature. This is neat! I do have a question though. If the RAG fails on a large file, does 'Get File Contents' just throw the whole thing into the context window?
ОтветитьIs unstract use for document classification
ОтветитьYes local setup will help
Please make a vlog for the local setup.