Hi everyone, curious if anyone can help me with this.
I have a table called Schedule that stores events. I also have another table called Items for Schedule, which can contain multiple rows per Schedule row — each row represents a specific item for that event.
For example, if I have an event on March 16 in the Schedule table, and I’m serving chicken, meat, fish, and dessert, those would appear as four separate rows in the Items for Schedule table.
This table also has four daughter tables (used for calculations, kitchen reports, etc.). Normally, when a user adds a new row to Items for Schedule, it automatically generates corresponding rows in all daughter tables with a shared unique ID so everything stays linked.
What I’m trying to figure out is this: how can a user go to a new event — say March 21 — and bulk copy all the items from the March 16 event? The goal is not to move or remove anything from the original event, but to create new rows tied to the new event ID.
At the same time, I also need the system to automatically create the matching rows in all daughter tables for each copied item, just like it does when a new item is added manually.
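Since the platform isn't named, here is a rough sketch of the copy logic in plain Python, using in-memory lists as a stand-in for the tables. All names here (`item_id`, `schedule_id`, the daughter table names) are illustrative assumptions, not the actual schema — the point is just that each copied item gets a fresh shared ID that also goes into every daughter row:

```python
import uuid

# In-memory stand-in for the real tables; column and table names below
# are assumptions for illustration only.
items = [
    {"item_id": "a1", "schedule_id": "mar16", "name": "chicken"},
    {"item_id": "a2", "schedule_id": "mar16", "name": "fish"},
]
daughter_tables = {"calculations": [], "kitchen_reports": []}

def copy_items(items, daughter_tables, source_event, target_event):
    """Copy every item from source_event to target_event, leaving the
    originals untouched and creating linked daughter rows."""
    new_items = []
    for item in items:
        if item["schedule_id"] != source_event:
            continue
        new_id = str(uuid.uuid4())  # fresh shared ID for this copy
        copy = dict(item, item_id=new_id, schedule_id=target_event)
        new_items.append(copy)
        # Mirror the row into each daughter table, linked by the same ID
        for rows in daughter_tables.values():
            rows.append({"item_id": new_id, "schedule_id": target_event})
    # Append after the loop so the originals are never re-scanned
    items.extend(new_items)

copy_items(items, daughter_tables, "mar16", "mar21")
```

The originals stay where they are; only new rows tied to the target event are appended, with one matching row per daughter table.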
Hi, thank you for the suggestion — I’m going to give this a try.
I’m running into an issue with loops though. If there are 10 items and my loop limit is set to 25, it ends up creating 25 rows with duplicates instead of just processing the 10 existing items.
Any idea what might be causing that or how I can limit the loop so it only runs once per actual item rather than up to the maximum limit?
Try limiting the query with a second filter (for example: & columnX is empty), then add a step at the end of the loop that sets that column inside the row to “sent” (or anything you want). That way, the moment the loop goes back around, the query will be empty.
This shouldn’t happen, though. The limit should only bound the upper side of the loop. As long as you set it to loop through the relation/query itself, it will run once per row in that list and then stop.
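The difference between the two loop setups can be shown with a toy Python sketch (list contents are just the example items from the original question):

```python
items = ["chicken", "meat", "fish", "dessert"]
LOOP_LIMIT = 25

# Looping over a fixed range keeps going after the real list is exhausted,
# which is one way duplicates like those described above can appear.
bad = [items[i % len(items)] for i in range(LOOP_LIMIT)]

# Iterating the query result itself runs exactly once per row; the limit
# only caps the upper bound and never pads the loop out to 25.
good = [item for item in items[:LOOP_LIMIT]]
```

The first version produces 25 rows with repeats; the second produces exactly 4, one per actual item.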
Can you share your configs through some screenshots?