
Building for WAN



Recommended Posts

Hi there,

 

We're using an FM database through an externally hosted service (triple8), which is accessed from a number of locations across the country.

 

I currently have a situation where we are viewing a report that can contain 1200+ records in a filtered portal, and each time the filter is triggered it can take as long as 30 seconds for the screen to refresh; scrolling down the list is painfully slow.

 

I've tried a list view instead and performance for filtering is improved significantly, but scrolling down the list is still slow (not as slow as the portal though). The issue is that viewing it in list view poses a whole heap of coding challenges, as we have a parent-child setup (similar to an invoice / invoice line items setup).

 

We find we often have performance issues when viewing lists and portals, particularly if they use related fields, contain large record sets (e.g. 1200+ records in a portal) or unstored calculations, or are filtered and sorted.

 

I understand that a fundamental concept of using a database is to avoid data duplication, i.e. using a primary key to view data from other tables.

 

Considering our situation, I find that performance is significantly improved when I DO duplicate data, i.e. use a lookup or auto-enter calc field to duplicate data that will be viewed in list views, portals etc. (e.g. duplicating product descriptions in invoice lines).
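
A minimal sketch of what that looks like, assuming hypothetical InvoiceLines and Products tables (the field names are made up for illustration):

    // Auto-enter calculation (or lookup) on InvoiceLines::ProductDescription,
    // stored locally so a list view doesn't have to fetch the related Products
    // record for every row it draws:
    Lookup ( Products::Description )

Because the result is stored in the line-item record, the list view only pulls InvoiceLines data over the WAN instead of one related Products record per row.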

 

I understand that doing this will increase file size considerably, so are there any other options to improve performance without bloating our database unnecessarily?

 

 

 

 


I have trouble figuring out why you would ever have to present 1200+ records to the user... nobody can scroll through that list and remember what they are seeing or even visually find everything that they need.

 

So there seems to be a UX challenge here: provide the user with what they need without throwing 1200 records at them. I would solve these types of issues first, before messing with the data normalization.


As an aside: it is fine for a product description to be duplicated on the invoice - that is data "at the time of". If you ever change a product's description, you don't want it to affect all of your old invoices.


Thanks for your input Wim. To put this into context:

It's a sales report that provides analysis on all product sales.

The workflow involves seeing a list of all products that have been sold between certain dates. The user can then filter the portal by supplier, product type etc., select certain products and create a purchase order. One product type may contain 30 or 300 product lines (the same goes for suppliers), and they want to switch between types, suppliers etc. quickly.

Currently I'm building a table with all product records with sales between the set dates, then allowing the user to filter the found records.

I could cut the number of related records down by only populating the related table based on the filter selection. The downside is that each time the user adjusts the filter I would have to clear the existing related records and rebuild them (using Perform Script on Server), and that process takes around 60-90 seconds to run - which is longer than the time it takes for the portal filter to refresh.
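
For anyone following along, the shape of that rebuild is roughly as follows - the script, table and field names are hypothetical, and the server-side body is only outlined:

    # Client side: hand the user's filter criteria to the server
    Perform Script on Server [ "Rebuild Report Lines" ;
        Parameter: List ( Report::gStartDate ; Report::gEndDate ; Report::gSupplierID ) ;
        Wait for completion: On ]

    # Server side ("Rebuild Report Lines"): unpack the parameters, delete the previous
    # report lines, find the matching sales and write the new related set.
    Set Variable [ $startDate ; Value: GetValue ( Get ( ScriptParameter ) ; 1 ) ]
    Set Variable [ $endDate ; Value: GetValue ( Get ( ScriptParameter ) ; 2 ) ]
    Set Variable [ $supplierID ; Value: GetValue ( Get ( ScriptParameter ) ; 3 ) ]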


Currently I'm building a table with all product records with sales between the set dates, then allowing the user to filter the found records.

 

Then I would try to automate this as a nightly process - pre-aggregate the data so that you don't have to rebuild the related set every time.
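
A rough sketch of that kind of pre-aggregation, run as a nightly FileMaker Server schedule (table and field names are invented, and the write loop is only outlined):

    # Nightly schedule (sketch): summarise yesterday's sales so the report can read
    # one DailySales record per product per day instead of every individual sales line.
    Go to Layout [ "SalesLines" ]
    Enter Find Mode [ Pause: Off ]
    Set Field [ SalesLines::SaleDate ; Get ( CurrentDate ) - 1 ]
    Perform Find []
    Sort Records [ Restore ; With dialog: Off ]
    # Loop over the found set sorted by ProductID and, at each change of product,
    # create one DailySales record with the date, ProductID, total quantity and total value.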


Hi Wim,

 

Thanks again for your input.

 

The challenge is that the user will choose a variable date range for the report (e.g. a start and end date), so the report can only be created on demand based on the user's criteria.

 

I am considering adding more filtering at the report-building stage though, e.g. including product type, supplier etc. when building the report - at least that way, once they start scrolling through the portal, it should load faster since there are fewer records.

 

I guess the challenge here is optimising for speed while weighing the file-size cost, and possibly looking at alternative user workflows.

 

Thanks for your input Wim.


Hi Infoweb,

 

When writing static aggregate data, as Wim suggests, remember that you can also aggregate by week or day. So if there are 12,000 transactions in a month and 1,500 of them in a week, then pre-aggregating 1,500 records is still better than aggregating 12,000.

 

Sometimes you must set some rules for the business. It wouldn't make sense to begin a date range in the middle of the week (for example). They will always ask for everything every way possible, and NO program offers that kind of flexibility, so be willing to tell them it will be faster if records can be restricted to week ranges. Most of the time you will find that they are very receptive once it has been explained. If not, document it in an email or requirements document and let it be slow. Then when they complain, you can show that you tried to warn them. Be proactive.

 

I could cut the number of related records down by only populating the related table based on the filter selection. The downside is that each time the user adjusts the filter I would have to clear the existing related records and rebuild them (using Perform Script on Server), and that process takes around 60-90 seconds to run - which is longer than the time it takes for the portal filter to refresh.

 

Why use portal filters? 1,200 records is NOT very large, so slow-downs should not be a problem, even over the WAN. You can also use the virtual list technique for displaying ...
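
Very roughly, the virtual list idea works like this (names invented, only the display side shown): the report is built server-side and handed back as a return-delimited list in a global variable, and a small utility table of numbered rows displays it through an unstored calculation:

    // VirtualList::RowNumber is a plain number field pre-filled 1, 2, 3, ...
    // VirtualList::cDisplay is an unstored calculation that shows one line of the
    // $$REPORT variable the report script populated:
    GetValue ( $$REPORT ; VirtualList::RowNumber )

That way the report data travels once, as the script result, instead of as per-row record fetches.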

 

I would mention these things:

  1. What objects are on the layout which already require packet movement and fetching?
  2. What relationships and related records are also being pre-fetched at the same time?
  3. As Wim says, they can't see all 1,200 at a time anyway, nor can they see ALL related records and ALL fields, so lighten the load by considering moving some of the fields to a 1:1 table.
  4. Use relational filters over portal filters when the related set can be over 50 records or so (see the sketch after this list).
  5. Do not sort the relationship or portals.
  6. Run reports using PSOS. Pass back the IDs and use a virtual report for display.
  7. File size is less important than speed.
  8. These are the places where it is WELL WORTH the time and money to hire a professional developer to advise depending upon your SPECIFIC solution. 
  9. Check out FMBench - it can help you pin down bottlenecks in your solution. http://fmbench.com
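
On point 4, the difference is roughly this (the global fields and keys are hypothetical): a portal filter is a calculation the client evaluates for every related row after downloading them all, whereas a relational filter puts the same criteria into the relationship so only matching rows are fetched in the first place.

    // Portal filter calculation - evaluated on the client, row by row, after all
    // 1,200 related records have already come across the wire:
    Sales::SupplierID = Report::gSupplierID

    // Relational filter - the portal's relationship carries the criteria itself, e.g.
    //   Report::gStartDate  ≤  Sales::SaleDate
    //   Report::gEndDate    ≥  Sales::SaleDate
    //   Report::gSupplierID =  Sales::SupplierID
    // so the server only sends the Sales records that match.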

Stop building on your solution until you can get the specific final requirements on paper. If you do not, you might give them more than they need (which is resource-wasteful), or you may need to adjust your work (which costs them money and your time), or you will build something sub-optimal because you will be adding on in bits and pieces (software sprawl, similar to urban sprawl) instead of evaluating the entire task as a whole.



