Hello! I was wondering what the best architecture would be for building a large-scale inventory management system where product A can exist in different quantities across multiple warehouses.
The simple approach is to create a column for each warehouse in the product master itself, but I doubt this method’s scalability past 30-50 warehouses.
This method is also limiting in the sense that a user cannot create a new warehouse; only the editor can.
The really cool way that Google Sheets can solve this problem is with its use of ‘SUMIF’.
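To make that concrete, here’s the SUMIF idea as a tiny Python sketch: instead of one column per warehouse, there’s a single movements table, and stock is just a filtered sum (the table and field names here are illustrative only, not my actual schema):

```python
# SUMIF-style stock rollup: one "stock movements" table instead of
# one column per warehouse. Names and numbers are illustrative.
movements = [
    {"product": "A", "warehouse": "WH1", "qty": 100},
    {"product": "A", "warehouse": "WH2", "qty": 40},
    {"product": "A", "warehouse": "WH1", "qty": -20},  # a sale
    {"product": "B", "warehouse": "WH1", "qty": 15},
]

def stock(product: str, warehouse: str) -> int:
    """SUMIF equivalent: sum qty where product and warehouse both match."""
    return sum(m["qty"] for m in movements
               if m["product"] == product and m["warehouse"] == warehouse)

print(stock("A", "WH1"))  # 80
print(stock("A", "WH2"))  # 40
```

A new warehouse is then just a new value in the warehouse field rather than a new column, which is exactly why this scales where per-warehouse columns don’t.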
Okay, I’ll explain how I am displaying this to the user.
I am building an ERP system and, within that, there are several modules like sales orders, invoices, and purchases.
Let’s focus on the sales order side of it. When I want to punch in an order for product A, which has some amount of stock across warehouses 1, 2, and 3, I need to show the stock across all three warehouses so that I can choose the best warehouse to deduct the product’s inventory from.
I have attached a link to a short video and a screenshot, below.
My problem is that I don’t know how to allow a user to add more warehouses to the ERP without contacting me to add more columns. I also don’t know how this kind of architecture will scale: what would I do if a client has 25 warehouses across the country?
I should add that although I didn’t show it in the video, it would be quite a simple matter to extend that so that the appropriate Warehouse is automatically selected once a Product selection is made, i.e. the Warehouse with the most stock.
Hey Darren, I appreciate you helping me out with the video.
I saw your video and there were some things that really helped me out, yet I’m unable to apply them: the way my project’s inventory architecture is built, I can’t use the method you’ve shown.
My project, Athena, is designed to be a full-stack ERP system.
Within my products’ inventory, there is also a need to calculate the potential production. Potential production is a calculated field that lets you view how many pieces of an assembled product you can create from its components (like a BOM system). Therefore, auto-choosing the warehouse becomes very difficult, as potential production is calculated from each component’s total finished stock (not from each warehouse’s finished stock).
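For concreteness, potential production is essentially a minimum taken over components; a sketch with made-up BOM numbers:

```python
# Potential production of an assembled product is limited by the
# scarcest component, counted across ALL warehouses combined.
# The BOM and stock figures below are invented for illustration.
bom = {"screw": 4, "panel": 2}              # components needed per assembled unit
total_stock = {"screw": 100, "panel": 30}   # component stock summed across warehouses

potential = min(total_stock[c] // needed for c, needed in bom.items())
print(potential)  # 15 assembled units (panels are the bottleneck: 30 // 2)
```

Because the minimum is taken over warehouse-wide totals, the result says nothing about which warehouse to pull from, which is exactly why auto-selection breaks down here.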
The second issue: suppose I create a table that links warehouses and products by ID, wherein for product ‘A’ and warehouses ‘1’, ‘2’, and ‘3’ we’d create three distinct rows: product A linked with one warehouse per row.
In this scenario, to update product A’s inventory at warehouse 1, a simple action would increment that row’s quantity cell to give us the new, modified inventory. Example: product A’s stock in warehouse 1 = 100 pcs; a sales order for 20 pieces → increment the row for product A and warehouse 1 by -20.
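In code terms, that link-table update is roughly the following (illustrative names, not my real schema):

```python
# Link-table model: one row per (product, warehouse) pair,
# updated by incrementing the quantity column in place.
inventory = {
    ("A", "WH1"): 100,
    ("A", "WH2"): 40,
    ("A", "WH3"): 25,
}

def adjust(product: str, warehouse: str, delta: int) -> int:
    """Increment the row's quantity by delta; return the new quantity."""
    key = (product, warehouse)
    inventory[key] = inventory.get(key, 0) + delta
    return inventory[key]

# Sales order for 20 pcs of product A from warehouse 1:
print(adjust("A", "WH1", -20))  # 80
```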
The issue occurs for two reasons:
Glide Big Tables cannot update in real time
Even if real-time updating were possible, if you and I were trying to increment the same row at the same time, we would be entering a race condition where only one of our increment actions would take effect.
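Here is the lost-update problem in miniature, with the interleaving written out explicitly (a sketch of the failure mode, not of Glide’s internals):

```python
# Lost update: two clients read the same stock value, each computes
# its own new total, and the second write silently overwrites the first.
stock = {"A@WH1": 100}

# Both users read BEFORE either one writes:
read_user1 = stock["A@WH1"]        # user 1 sees 100
read_user2 = stock["A@WH1"]        # user 2 also sees 100

stock["A@WH1"] = read_user1 - 20   # user 1 writes 80
stock["A@WH1"] = read_user2 - 30   # user 2 overwrites it with 70

print(stock["A@WH1"])  # 70 -- but the correct total is 50
```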
Okay, thanks for that, and the video. That really helps to explain your situation.
You are correct in what you say about potential race conditions, and there really is no pure Glide solution to that. The only suggestion I could offer to work around that would be to use a combination of webhook + Make + Glide API. In this way, the requests would be queued and handled correctly in sequence.
With regards to the Big Tables updating, again you are somewhat correct. Changes made by a user should be instantly reflected for that user, but are not visible to other users without a refresh. Note that a “refresh” in this context doesn’t mean a refresh action or a full screen refresh, it could be simply navigating to a page, or paging through a collection.
Anyway, I’d encourage you to give the webhook/Make/API option a try at least. I’ve been doing my own work with Big Tables recently, and for various reasons I have one flow where data is entered into a custom form, and then a new row is added not with an add row action, but rather via that webhook/Make/API combination. In the testing I’ve been doing, I see the new row appear consistently within 1-2 seconds, which is good enough for my use case.
Just getting back to your stock update scenario, the one situation where I see that could break down is one where you could potentially wind up with a negative stock count. For example, you have a stock count of 100 for Item X, and two users submit an order for 100 at the same time. Both requests would be processed by Make, and you would wind up with a stock count of -100. Although I think you could work around that by extending the Make scenario to first check the current stock level before adjusting, and if the current count is insufficient you could have a branch that invalidates the order. (The new Filtered Query method with Big Tables would make that a bit easier).
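As a rough sketch of that check-then-adjust guard (illustrative Python, not an actual Make scenario; it only closes the gap because the Make queue processes requests one at a time):

```python
# Guarded decrement: read the current stock first, and only apply
# the order if enough is on hand; otherwise invalidate it.
# All names here are illustrative.
inventory = {("A", "WH1"): 100}

def place_order(product: str, warehouse: str, qty: int) -> bool:
    """Return True if the order was applied, False if invalidated."""
    key = (product, warehouse)
    current = inventory.get(key, 0)   # the "check current stock" step
    if current < qty:
        return False                  # branch: invalidate the order
    inventory[key] = current - qty    # the "set row values" step
    return True

print(place_order("A", "WH1", 100))  # True  -> stock now 0
print(place_order("A", "WH1", 100))  # False -> second order invalidated
```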
Anyway, concurrency and race conditions are always a potential issue with busy Glide Apps, and I’m afraid that there are no easy solutions that I know of.
Your take on using Make as an API medium to avoid concurrency sounds great, though I would be trading the Add Row action (1 update) for Trigger Webhook + Set Row Values via the Glide API (2 updates).
I would be doubling my cost there, yet I can see that simply setting inventory values via Make and the Glide Set Row Values API may be advantageous cost-wise, as Glide will not have to keep re-querying my inventory for each product by rolling up rows in the product activity master.
The only way I think I could make a decision is by seeing Glide’s Big Table Query Costs.
I really appreciate you taking the time to help me out. It’s good to have a conversation at this level of technicality, because I feel like I’m not knee-deep in my project all alone.
Also, if and when you can spare the time, could you please show me how to use the Glide Big Tables Filtered Query API, as well as the normal Get Rows API with the continuation token, to read all the rows until the maximum has been read?
It will be of great help to me and the ERP, because I’ll be able to move data across different services, including sending product stock data to my website at regular intervals.
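In pseudocode terms, I imagine the continuation loop would work something like the sketch below — the ‘data’ and ‘continuation’ field names are my assumptions, not the documented API, so the real response shape would need checking against Glide’s docs:

```python
# Generic continuation-token paging loop, with the HTTP call factored
# out as fetch_page so the loop itself is easy to test. The "data" and
# "continuation" keys are assumed, not Glide's documented response shape.
def read_all_rows(fetch_page) -> list:
    """Call fetch_page(continuation) repeatedly until no token remains."""
    rows, continuation = [], None
    while True:
        payload = fetch_page(continuation)
        rows.extend(payload.get("data", []))
        continuation = payload.get("continuation")
        if not continuation:          # no token means the last page was read
            return rows

# Fake two-page response standing in for the real HTTP call:
pages = {None: {"data": [1, 2], "continuation": "abc"},
         "abc": {"data": [3]}}
print(read_all_rows(lambda c: pages[c]))  # [1, 2, 3]
```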