I wrote an upsizing program that stores info in a table named DBFUPSIZE.DBF.
It stores the number of seconds it takes to upsize each table.
It has become obvious that the number of records is not the important factor; what matters is the size of the DBF and FPT files.
I'm going to try to figure out a logarithmic calculation based on the data, but unfortunately, the DbfUpsize() function does not make calls to the logger during the actual data importing.
I can only determine the full amount of time (in seconds) for each table.
I am hoping that the PostgreSQL server can provide some of this information in its own logs.
My goal is to give an estimate of how long it will take for any table to be upsized depending on its size, so the user can be aware.
I want to figure out an equation to estimate the upsizing time of a complete set of tables before the user starts the process.
The Upsizing program I wrote gives a list of each table and allows the user to tag tables from the list to upsize.
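For what it's worth, here is a rough sketch of the kind of per-table estimate I have in mind: a simple linear model on file size. The function names and the coefficients are only placeholders (they would have to come from fitting the timings collected in DBFUPSIZE.DBF); none of this is anything DbfUpsize() itself provides.

Code:
// Placeholder estimate: seconds ~= fixed overhead + rate * (DBF size + FPT size).
STATIC FUNCTION FileBytes( cFile )
   LOCAL nHandle := FOpen( cFile )      // read-only open
   LOCAL nBytes  := 0
   IF nHandle >= 0
      nBytes := FSeek( nHandle, 0, 2 )  // 2 == FS_END: seeking to end returns the size
      FClose( nHandle )
   ENDIF
RETURN nBytes

FUNCTION EstimateUpsizeSeconds( cDbfFile, cFptFile )
   LOCAL nBase  := 5.0                  // per-table overhead in seconds (assumed)
   LOCAL nRate  := 2.5                  // seconds per megabyte (assumed)
   LOCAL nBytes := FileBytes( cDbfFile )
   IF !Empty( cFptFile ) .AND. FExists( cFptFile )
      nBytes += FileBytes( cFptFile )
   ENDIF
RETURN nBase + nRate * ( nBytes / ( 1024 * 1024 ) )

Summing this over all tagged tables would give the whole-set estimate; whether a linear or a logarithmic fit works better will have to come out of the collected data.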
Upsizing Analysis
The eXpress train is coming - and it has more cars.
Re: Upsizing Analysis
Great work!
Best regards,
Tom
"Did I offend you?"
"No."
"Okay, give me a second chance."
Re: Upsizing Analysis
I have been communicating with Alaska Software and they have shown an interest in solving some of my issues.
For now, I am hoping to get the source for DbfUpsize() because I believe that some "tweaking" could solve a few of my issues.
For example, much of the legacy data goes back more than a decade, but it must remain available for access when required.
We could re-design the original application to break the data into more tables, which we have already done on some large tables.
But, for now, I only need to determine a practical maximum number of records for each table.
If I can upsize data based on the most recent (LIFO) records, then this will help a lot.
I want the ability, in the upsize file, to choose FIFO or LIFO, the number of records to upsize, and the starting record number.
Both applications I am evaluating have over 100 data files, yet only 3 or 4 of them appear to have an upsizing problem.
I determined that .FPT or .DBT size affects upsizing more than anything.
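To illustrate the LIFO idea, here is a minimal sketch of a pre-step that could be done outside DbfUpsize(): copy only the newest records into a work file and upsize that instead of the full table. The names are hypothetical, and it assumes that record number order reflects age.

Code:
// Copy only the newest nMax records of a table into a work file.
PROCEDURE CopyNewest( cTable, cWorkFile, nMax )
   LOCAL nFirst
   USE (cTable) NEW SHARED
   nFirst := Max( 1, LastRec() - nMax + 1 )
   DbGoto( nFirst )                 // position on the first record to keep
   COPY TO (cWorkFile) REST         // REST copies from here to the end of the file
   CLOSE
RETURN

A FIFO variant would simply be GO TOP followed by COPY TO (cWorkFile) NEXT nMax, and a starting record number could be passed in instead of being derived from LastRec().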
The eXpress train is coming - and it has more cars.
Re: Upsizing Analysis
I had a session about the PGDBE at our last German conference in April. I reported my experiences and problems. The problems mainly concern complex filters and the poor performance of some maintenance functions (including DbfUpsize). I had some discussions with Alaska in preparation for the conference. They took my filter problems very seriously and created the "remote filter" some weeks after the conference. And Steffen (as I remember) told me they didn't invest very much into the service functions, which is the reason for the poor performance. So, if you find a way to improve the upsizing, I would be willing to share the costs of that.
Best regards,
Tom
"Did I offend you?"
"No."
"Okay, give me a second chance."
Re: Upsizing Analysis
There is a way.
Multiple simultaneous processes might be the solution, since this looks single-threaded: it goes from one table to the next and does the upload job, so a lot of CPU and RAM is wasted, as usual with xbase.
Create a dbfupsize config in individual sets: one per large DBF table and one for everything small.
Then main.cmd with
start large1\upload1.cmd
start large2\upload2.cmd
start large3\upload3.cmd
start restdbf\uploadAllTheRest.cmd
This should be enough for a significant speed-up, as long as the server can be saturated with INSERTs.
(This is the method I use for data migration, but NOT WITH PGDBE, only ODBC.)
On the other hand, it might not work at all, since the triggers cause so many writes that they saturate the IOPS of the PostgreSQL server.
Analyzing the code should produce some results, and probably some "WTF" moments too.
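Going back to the idea of individual sets, here is a rough sketch of how the split could be chosen automatically. The function name and the threshold are only placeholders, not anything dbfupsize itself offers.

Code:
// One upsize set per "large" DBF, plus one shared set for all the small ones.
FUNCTION SplitUpsizeSets( aTables, nThreshold )
   LOCAL aSets := {}, aRest := {}, i, nHandle, nBytes
   FOR i := 1 TO Len( aTables )
      nHandle := FOpen( aTables[i] )                             // read-only open
      nBytes  := IIf( nHandle >= 0, FSeek( nHandle, 0, 2 ), 0 )  // 2 == FS_END
      IF nHandle >= 0
         FClose( nHandle )
      ENDIF
      IF nBytes > nThreshold
         AAdd( aSets, { aTables[i] } )   // large table gets its own set / process
      ELSE
         AAdd( aRest, aTables[i] )       // small tables share one set
      ENDIF
   NEXT
   AAdd( aSets, aRest )
RETURN aSets

Each returned set would then get its own config and its own upload*.cmd.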
> For now, I am hoping to get the source for DbfUpsize() because I believe that some "tweaking" could solve a few of my issues.
Something tells me they also keep the data in a local buffer and do unnecessary buffer updates from the server?
Re: Upsizing Analysis
Tom -
I am writing a generic upsizing program named PgUpsize.Prg which calls DbfUpsize().
I will make the program source available to anyone who needs it.
The goal is to save time when repeatedly upsizing data while analyzing the upsizing performance of an application.
So far, I haven't found a way to tell DbfUpsize() to cancel an upsizing that is in progress, other than QUIT.
How do you do this?
This program uses a PgUpsize.Ini file to set up everything and a PgUpsize.dbf to store all the upsizing data.
Code:
[UPSIZE]
DbeName=FOXCDX
StorageName=FOXDBE
OrderName=CDXDBE
MemoFileExt=.FPT
DataPath=C:\Medalion
ToolTitle=Medalion Upsizing Tool
UpsizeFile=Medalion.Upsize
ConnectionName=test
ServerName=localhost
UserID=postgres
Password=<password>
Database=Medalion
PgDbeLicenseKey=<license key>
PgDbeLicensee=<licensee>
The PgUpsize.dbf table compares the DBF timestamp with the UPSIZE timestamp to save time when upsizing again and again. It won't re-upsize a file that is current.
Code:
// Create the tracking table on first run
IF !Fexists('PgUpsize.Dbf')
   aStru := { ;
      {'TABLENAME','C',20,0}, ;
      {'DBFDATE','D',8,0}, ;
      {'DBFTIME','C',8,0}, ;
      {'UPSIZEDATE','D',8,0}, ;
      {'UPSIZETIME','C',8,0}, ;
      {'DBFSIZE','N',10,0}, ;
      {'CDXSIZE','N',10,0}, ;
      {'MEMOSIZE','N',10,0}, ;
      {'RECORDS','N',10,0}, ;
      {'FIELDS','N',5,0}, ;
      {'RECSIZE','N',6,0}, ;
      {'UPSIZESEC','N',7,1}, ;
      {'UPSIZEMIN','N',7,2}, ;
      {'UPSIZEHRS','N',7,2}}
   dbCreate('PGUPSIZE.DBF',aStru,M->cDbeName)
ENDIF
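A minimal sketch of the currency check described above: dDbfDate and cDbfTime are assumed to hold the DBF file's last-write stamp, obtained elsewhere; the field names match the structure shown, everything else is a placeholder.

Code:
// Returns .T. when the stored upsize stamp is not older than the DBF file stamp.
FUNCTION UpsizeIsCurrent( cTable, dDbfDate, cDbfTime )
   LOCAL lCurrent := .F.
   USE ("PGUPSIZE") NEW SHARED ALIAS PGUP
   LOCATE FOR Upper( AllTrim( PGUP->TABLENAME ) ) == Upper( AllTrim( cTable ) )
   IF Found()
      lCurrent := PGUP->UPSIZEDATE > dDbfDate .OR. ;
                  ( PGUP->UPSIZEDATE == dDbfDate .AND. PGUP->UPSIZETIME >= cDbfTime )
   ENDIF
   CLOSE PGUP
RETURN lCurrent

Only tables for which this returns .F. would then be passed on to DbfUpsize().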
The eXpress train is coming - and it has more cars.
Re: Upsizing Analysis
Hi, Roger.
> How do you do this?
We don't allow the upsizing to be cancelled. Once it runs, the only way to stop it is to kill the application. We decided to make this a single step. The "cancel" button is disabled after the process has started.
Best regards,
Tom
"Did I offend you?"
"No."
"Okay, give me a second chance."
Re: Upsizing Analysis
> We don't allow the upsizing to be cancelled.
I agree that you would not do this when deploying the upsizing to a client.
This tool was designed to evaluate and experiment when developing and analyzing the processes of upsizing.
Currently, I am analyzing 2 different client projects and I needed a tool that would work in both environments.
I think that I have evaluated one of the projects sufficiently to determine that the upsizing process will not be a problem.
However, that project is now giving me issues when running the application; for example, it uses a lot of VPE (Virtual Print Engine) reports.
Some run quite well; others fail because of functions that are unsupported under PGDBE, like OrdKeyNo().
The eXpress train is coming - and it has more cars.
Re: Upsizing Analysis
> others fail because of unsupported functions under PGDBE, like OrdKeyNo.
This is the only function I really miss, but I understand why having it with SQL is not a good idea. For everything else (which is not that much) I found solutions.
I remember Steffen promised to create a version of the upsizing tool that just creates a really, really big set of SQL statements (a dump) instead of talking to the server all the time. I wonder what happened to this plan. That would be an improvement.
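Just to illustrate the idea, here is a very rough sketch of such a dump generator, nothing official of course: it assumes cTable is both the DBF name and the target table name, handles only character, memo, numeric, date and logical fields, ignores NULLs and empty dates, and uses naive quoting.

Code:
// Write one INSERT statement per record of a DBF into a .sql file.
PROCEDURE DumpInserts( cTable, cSqlFile )
   LOCAL nHandle, i, cValues, xValue, cType
   USE (cTable) NEW SHARED
   nHandle := FCreate( cSqlFile )
   DO WHILE !Eof()
      cValues := ""
      FOR i := 1 TO FCount()
         xValue := FieldGet( i )
         cType  := ValType( xValue )
         DO CASE
         CASE cType == "C" .OR. cType == "M"
            cValues += "'" + StrTran( RTrim( xValue ), "'", "''" ) + "'"
         CASE cType == "N"
            cValues += LTrim( Str( xValue ) )
         CASE cType == "D"
            cValues += "'" + DToS( xValue ) + "'"   // empty dates would need NULL handling
         CASE cType == "L"
            cValues += IIf( xValue, "TRUE", "FALSE" )
         OTHERWISE
            cValues += "NULL"
         ENDCASE
         IF i < FCount()
            cValues += ","
         ENDIF
      NEXT
      FWrite( nHandle, "INSERT INTO " + cTable + " VALUES (" + cValues + ");" + Chr(13) + Chr(10) )
      DbSkip()
   ENDDO
   FClose( nHandle )
   CLOSE
RETURN

The resulting file could then be loaded with psql -f in one go instead of row-by-row inserts over the connection.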
Best regards,
Tom
"Did I offend you?"
"No."
"Okay, give me a second chance."