
Hi,

I'm a freelance geek, but I know damned little about databases. I do know, however, that indexing is what makes searches fast. My client's DB performance (FileMaker Server 3.0 on NT, 4.0 clients on NT) was fine for 6-8 years, and suddenly it's unusably slow. When the client selects certain report options from the server, they get a status message saying the report is taking so long because it's sorting through unindexed records, and it names a field. When I look at that field, it's an unstored calculation: the sum of fields in multiple related DBs, which are themselves calculations based on fields in other related DBs, and so on. There are 12 databases in total, all hooked into one another in a number of ways.

Lots of fields are generated on the fly by a formula, so they're not stored and not indexed; most of the fields that aren't formulas are indexed. I've been sifting through the formulas trying to find others that can be indexed, and wondering whether I should make FileMaker store some of the calculations so they can be indexed too. But I'm afraid that if I do that to the wrong one, I'll introduce errors into the data.
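To make that concrete, here's a made-up sketch of the kind of calculation chain I mean. The file and field names are hypothetical and I'm paraphrasing the formulas, but the structure matches what I'm seeing:

    In Invoices (unstored calculation; FMP won't index it because it
    references related records):
        InvoiceTotal = Sum(LineItems::ExtendedPrice)

    In LineItems (also an unstored calculation, for the same reason):
        ExtendedPrice = Quantity * Products::UnitPrice

    In Products (a plain stored number field, so it can be indexed):
        UnitPrice

Sorting or searching on something like InvoiceTotal seems to force FMP to evaluate the whole chain for every record, across every related file, which I assume is what the "unindexed" status message is telling me.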

My questions are these:

1) Is there some behavior that's typical of these 3/4 environments? If you know what typically happens with them, maybe you can point me to an easier way of fixing it.

2) If I tell the database to store a value that was arrived at via a calculation, is FMP smart enough to update it when the fields that feed it change? Or am I introducing the potential for errors in the data?

3) If the databases are all in shared mode, where formulas and field definitions supposedly can't be changed, how could any of this have changed? Why would a field suddenly decide it's not going to index itself? Why was performance fine one day and unusably slow the next?

4) If an unindexed field is the reported problem, could it be a red herring masking some other issue? Is there a threshold where this kicks in once you pass X number of records? Is there something else I should be looking at?

Here's the hardware I threw at it before realizing that this truly was a DB issue:

The server (a 266 MHz Pentium running NT Workstation) runs FileMaker Server 3.0, with 4.0 clients on NT, all connected via TCP/IP. I moved the server onto one of the client machines, since the original server hardware broke a little more every time I touched it, and upgraded their office hub from 10 Mb to Fast Ethernet. No more than 3 users are on the DB at a time.

Thanks in advance,

Michael
