Infragistics UltraGrid and Security/Localisation


Trent Taylor
StrataFrame Developer (14K reputation)
Group: StrataFrame Developers
Posts: 6.6K, Visits: 7K
As I mentioned in an earlier post, I have had problems with another system that simply can't handle a large number of rows. True, even 10,000 records can be considered large, but it should be possible to handle it.

Yeah, you should be good here.  We don't attempt to update all records, only records that have changed.  So this part won't be an issue.
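For what it's worth, the generic ADO.NET version of that idea looks something like the sketch below (a rough illustration only, not StrataFrame's actual internals; the table name and connection string are placeholders): the adapter inspects each row's RowState and only issues SQL for the rows that actually changed.

using System.Data;
using System.Data.SqlClient;

class UpdateChangedRowsSketch
{
    // Generic ADO.NET sketch (not StrataFrame internals): the adapter only
    // issues INSERT/UPDATE/DELETE statements for rows whose RowState has
    // changed, so a 10,000-row table with a single edit produces one UPDATE.
    static void SaveChangedRows(DataTable customers, string connectionString)
    {
        if (customers.GetChanges() == null)
            return;                              // nothing dirty, nothing to send

        using (var cn = new SqlConnection(connectionString))
        using (var adapter = new SqlDataAdapter("SELECT * FROM dbo.Customers", cn))  // hypothetical table
        using (var builder = new SqlCommandBuilder(adapter))   // auto-builds the DML commands
        {
            adapter.Update(customers);           // writes only the changed rows
        }
    }
}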

Trent Taylor
StrataFrame Developer (14K reputation)
Group: StrataFrame Developers
Posts: 6.6K, Visits: 7K
You can get that version here: http://forum.strataframe.net/Topic15184-22-1.aspx
Aaron Young
Advanced StrataFrame User (569 reputation)
Group: StrataFrame Users
Posts: 277, Visits: 1.1K
I can see why I couldn't find the Infragistics wrapper - it is looking for 7.3 and I have 8.1 installed.

Never mind, at least I know where to look.

Aaron Young
Advanced StrataFrame User (569 reputation)
Group: StrataFrame Users
Posts: 277, Visits: 1.1K
I would agree that I would never intend to have anywhere near a million records, let alone 125 million, in a BO. I was simply interested to find out whether 1 million is the physical limit, as indicated in the help file. My reasoning is that if SF can handle a BO with a million records, then it will handle one with 10,000+.

As I mentioned in an earlier post, I have had problems with another system that simply can't handle a large number of rows. True, even 10,000 records can be considered large, but it should be possible to handle it.

Thanks guys.

Trent Taylor
StrataFrame Developer (14K reputation)
Group: StrataFrame Developers
Posts: 6.6K, Visits: 7K
Yeah, everything that Greg said is very true. I think it pretty much re-emphasizes what I was saying, just from another perspective. :)
Greg McGuffey
Strategic Support Team Member (4.8K reputation)
Group: Forum Members
Posts: 2K, Visits: 6.6K
I missed that there was any size limit on the number of records that a BO could handle (independent of hardware/network resources). I'll be interested to see if there is a limit.



Of course, network and hardware would likely be a limiting factor even if the BO can handle an unlimited number of records. 125 million records would require some serious memory and/or network connectivity if you try to bring them all into the BO on a client machine (125 million records on an appropriately sized "modern" db server is very doable and done all the time). And then there is the issue of what in the world a user would do with 125 million records available on their machine. If they browsed a record every second, it would take something like 4 years to go through them all: 125,000,000 seconds is roughly 1,447 days, or just under four years (assuming no sleeping, eating, or biology breaks at all).



If all 125 million records were needed to perform some business logic, then I'd try to use SQL first. Or I'd try a server solution that sat very near the db (which might use a SF BO...if there aren't any limits). I know the SF guys use CLR sprocs to do heavy lifting like this, with great results (SQL 2005+).
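To put a rough shape on that CLR sproc idea (just a sketch under made-up names; the table, column, and procedure here are hypothetical and this isn't anyone's actual code), the procedure runs inside SQL Server on the context connection, so none of the 125 million rows ever leave the server:

using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    // Hypothetical SQL CLR procedure: all the heavy lifting happens inside
    // the SQL Server process, and only the command (and perhaps a row count)
    // ever crosses the network.
    [SqlProcedure]
    public static void ArchiveStaleOrders(SqlInt32 daysOld)
    {
        // "context connection=true" uses the in-process connection of the
        // hosting SQL Server instance - no separate network round trip.
        using (SqlConnection cn = new SqlConnection("context connection=true"))
        {
            cn.Open();
            using (SqlCommand cmd = cn.CreateCommand())
            {
                cmd.CommandText =
                    "UPDATE dbo.Orders SET Archived = 1 " +
                    "WHERE OrderDate < DATEADD(day, -@days, GETDATE())";
                cmd.Parameters.AddWithValue("@days", daysOld.Value);
                cmd.ExecuteNonQuery();
            }
        }
    }
}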



Just some thoughts. I was intrigued by the post and thought I'd share them.
Trent Taylor
StrataFrame Developer (14K reputation)
Group: StrataFrame Developers
Posts: 6.6K, Visits: 7K
You would more than likely never make it to a million. This is by design with a disconnected data set; there is never any reason to have that many records on the client side. When you deal with that many records, you are generally performing some type of calculation or update, or potentially crunching massive amounts of data. When this is done, you would create a sproc or something along those lines that is called from the client and then processed on the server. The limitation is a moving line since it would come down to memory, etc.
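A rough sketch of what that looks like from the client side (again, an illustration only; the connection string and sproc name are placeholders, not part of SF): the client asks the server to do the crunching and gets back a result, never the rows themselves.

using System.Data;
using System.Data.SqlClient;

class ServerSideCrunchSketch
{
    // Hypothetical client-side call: instead of filling a business object
    // with millions of rows, ask the server to run the work and return only
    // the outcome. Connection string and procedure name are placeholders.
    static int RecalculateBalances(string connectionString)
    {
        using (var cn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.usp_RecalculateBalances", cn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandTimeout = 300;   // heavy server-side work may take a while

            var rowsAffected = new SqlParameter("@RowsAffected", SqlDbType.Int)
            {
                Direction = ParameterDirection.Output
            };
            cmd.Parameters.Add(rowsAffected);

            cn.Open();
            cmd.ExecuteNonQuery();      // all the row-by-row work stays on the server
            return (int)rowsAffected.Value;
        }
    }
}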
Aaron Young
Advanced StrataFrame User (569 reputation)
Group: StrataFrame Users
Posts: 277, Visits: 1.1K
It is a plus that you have your own application - particularly an application that manages large record sets. A number of "odd" barriers that I have previously hit appear to be covered by SF, so I guess you have been there before. :)

I notice in the help file that 1 million is often referred to as the potential limit for the number of rows in a business object. While I would never intend to fill a BO with anywhere near this number of rows, is this the actual physical limit? Our largest customer database is around 125 million records, so I just want to know that if a customer should ever, wisely or unwisely, ask for an add-on built around a BO with a huge number of rows, we have the option of delivering it. Just interested. :)

Aaron

Trent Taylor
StrataFrame Developer (14K reputation)
Group: StrataFrame Developers
Posts: 6.6K, Visits: 7K
Yeah, I understand how it goes in that arena. We didn't just wake up and decide to be a framework company one day. We actually purchased a number of our competitors and tried to use them for our medical software...and we ran into a number of scenarios like this. The straw that broke the camel's back was when we saved, and we were told it saved, but the records didn't really get saved. So we then decided that we would have to write our own framework...it took 2 full years and a serious commitment, but it has totally been worth it.

That is the beauty of StrataFrame: we use it ourselves for our medical software. We too deal with very large databases and record sets running into the hundreds of thousands and, in some cases, millions of records. So we understand the importance of speed, reliability, and scalability!

Aaron Young
Advanced StrataFrame User (569 reputation)
Group: StrataFrame Users
Posts: 277, Visits: 1.1K
Yes, you are exceeding expectations, and my expectations were high after reading your entire website!

For your information (and so you can have a laugh), I am here because I reported a bug for a competing system two months ago. I identified one of their routines that took 130 seconds to save a single record when the equivalent C# .NET code took 6-7 milliseconds (and yes, I do mean 130 seconds as opposed to 6-7 milliseconds - it is not a typo!). I had a dataset that contained around 13,000 parent records and a similar number of child records. Admittedly a large dataset, but only one record was modified before I did a save. Virtually the whole 130 seconds was spent by their custom code checking through the rows looking for modified records to save. The same logic exists in .NET, and when I timed it on the same operation it took only 6-7 milliseconds. After two months I am still waiting to hear whether they believe this is acceptable performance and whether they are going to fix it. My only reply so far was that they had to check the child records too, which accounted for the time - the point they missed was that the standard .NET code did exactly the same thing in a fraction of the time.
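For anyone curious what the "standard .NET" version of that check looks like (a sketch only; the data itself is hypothetical), filtering a DataTable by row state is the part that takes single-digit milliseconds even on 13,000 rows:

using System;
using System.Data;
using System.Diagnostics;

class FindModifiedRowsSketch
{
    // Sketch of the built-in approach: let ADO.NET pick out the dirty rows
    // by RowState instead of hand-rolled comparison code.
    static DataRow[] FindModified(DataTable parents)
    {
        var sw = Stopwatch.StartNew();

        // A row-state filter returns only Added/Modified/Deleted rows;
        // on roughly 13,000 rows this is essentially instantaneous.
        DataRow[] dirty = parents.Select(string.Empty, string.Empty,
            DataViewRowState.Added | DataViewRowState.ModifiedCurrent | DataViewRowState.Deleted);

        sw.Stop();
        Console.WriteLine("Found {0} modified row(s) in {1} ms",
                          dirty.Length, sw.ElapsedMilliseconds);
        return dirty;
    }
}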

Of course, this is the same company whose process for saving to a remote server was as follows (don't laugh):

1. On the client, you have a dataset containing 10,000 rows. You modify only a single row and call their save routine.

2. The client sends the entire 10,000 rows to the remote server in an uncompressed state.

3. The server scans through the 10,000 rows, checking for modified rows, to find the one modified row, which is then saved. The other 9,999 rows are effectively discarded.

4. To cap it all, the server then returns the entire 10,000 rows to the client (again uncompressed), and this replaces the 10,000-row dataset that was originally sent from the client in the first place.

An interesting way of stressing the weakest link. To be fair, they fixed that in their latest release. Unfortunately, that is when the 130-second bug was introduced.
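By way of contrast, the delta-only round trip that standard ADO.NET makes easy looks roughly like this (SaveOnServer below is a placeholder for whatever remote mechanism is actually used): only the changed rows cross the wire in either direction.

using System.Data;

class DeltaRoundTripSketch
{
    // Only the changed rows are sent to the server and only those rows come
    // back; the 10,000 unchanged rows never leave the client.
    static void Save(DataSet clientData)
    {
        DataSet delta = clientData.GetChanges();
        if (delta == null)
            return;                         // nothing modified, nothing to send

        // Ship just the delta (one row, not 10,000) and get back only those
        // rows, e.g. with server-generated keys or timestamps filled in.
        DataSet saved = SaveOnServer(delta);

        clientData.Merge(saved);            // fold the server's results back in
        clientData.AcceptChanges();         // mark everything as clean again
    }

    // Placeholder for the remote call; in reality this would go through a
    // web service, remoting channel, or the framework's data layer.
    static DataSet SaveOnServer(DataSet delta)
    {
        return delta;
    }
}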

Anyway, to be honest, I don't care anymore. Performance and design like that I can do without. :)

Aaron
