What I’d like to do is store 1 million rows in BT but only pull a small number of them into a Glide Table. Then I could do relations and rollups over computed columns.
In my current situation I use a filter in GSheet, but because GSheet can’t store millions of rows, I have to segment my app for each customer.
Yeah, I’ve seen your crazy setup
I think BigQuery is probably the direction you need to be taking.
Then you can store all your data in one giant BigQuery database, and each customer app can pull in the data for that app using a query.
Although, BigQuery does have similar limitations to Big Tables, so that might be a problem. But I do know that Glide are working to remove some of those limitations, so in your case it might be best to wait a while and see how things evolve.
It’s wayyyy less crazy now thanks to you and others in the community who have taught me best practices. I will take your advice again and wait to see how things evolve.
While I’m waiting I’ll have a go at BQ and educate myself further.
Big Tables is a native Glide data source. It’s just like regular Glide Tables, except… bigger. It is different in that instead of all rows being loaded to the user’s device immediately, rows are loaded as they are needed. It also has some limitations around data aggregation, which are documented.
BigQuery is what Glide refer to as a “queryable data source”. Data is stored in Google BigQuery, and your Glide Tables are defined and populated by queries that you write against that data.
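For illustration, a per-customer pull might look something like the sketch below. All the names here (the `myapp.orders` table, the `customer_id` column) are made up for the example, and in Glide you’d write the SQL in the query editor rather than run it from Python:

```python
from google.cloud import bigquery

# One central BigQuery table holds all rows; each customer app pulls
# only its own slice. Project/dataset/table/column names are hypothetical.
client = bigquery.Client()

sql = """
    SELECT order_id, order_date, amount
    FROM `my_project.myapp.orders`
    WHERE customer_id = @customer_id
"""

job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("customer_id", "STRING", "acme"),
    ]
)

# Only the matching rows come back, not the full million-row table.
for row in client.query(sql, job_config=job_config).result():
    print(row.order_id, row.amount)
```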
But here we need to be efficient: if I create a helper table with 1000 rows and 50 columns (thinking of the worst case) and my payload returns 20-40 rows in most cases, we are wasting data and loading unnecessary info.
I think I get what you’re selling. I use a technique where I adapted the Trebuchet method and the Miracle method to save on rows in other areas of my app. Essentially I have a few columns which store CSVs, and then I explode those into rows by transposing them into a helper table.
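In plain code, that explode step is roughly the following. This is just a sketch of the transformation (the field names are invented), not how Glide’s computed columns actually do it:

```python
# A few source rows each carry a comma-separated list; the helper table
# gets one row per list item. Field names ("id", "tags") are hypothetical.
source_rows = [
    {"id": "r1", "tags": "red,green,blue"},
    {"id": "r2", "tags": "green,yellow"},
]

helper_rows = [
    {"parent_id": row["id"], "tag": value.strip()}
    for row in source_rows
    for value in row["tags"].split(",")
]

print(helper_rows)
# [{'parent_id': 'r1', 'tag': 'red'}, {'parent_id': 'r1', 'tag': 'green'}, ...]
```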