Good work rkaye,
I have been testing with files that have over 70,000 rows and about 250 columns. What I noticed is that the older version became progressively slower and slower, but the amended code keeps plowing through the records at pretty much the same speed, since it is no longer doing a SQL SELECT for each cell. It still takes quite some time to get through a big file, though, so hopefully there are other similar areas that can be enhanced for speed.
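For anyone wanting to apply the same idea elsewhere, the pattern is to query once and then walk the result in memory, rather than hitting SQL per cell. A rough sketch below, with made-up table/cursor names (mytable, curExport), not the actual code from the class:

* Sketch only - "mytable", "curExport" and the variable names are hypothetical.
LOCAL lnCol, lcCell
* One query up front instead of one per cell...
SELECT * FROM mytable INTO CURSOR curExport
* ...then a single pass over the rows, reading each field from memory
SELECT curExport
SCAN
	FOR lnCol = 1 TO FCOUNT()
		lcCell = TRANSFORM(EVALUATE(FIELD(lnCol)))
		* ... escape lcCell and append it to the XML output here ...
	ENDFOR
ENDSCAN
USE IN curExport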
PS> I also updated the code in the GETXMLSTRING method, as I came across some invalid data in my testing table that made it error out. I put in code to check for unprintable characters via ASC():
LPARAMETERS tcstring
LOCAL lcstring, lcxmlstring, lnndx, lcchar, lnchar
IF LEFT(tcstring, 1) != " " .AND. EMPTY(tcstring) && bail out on a genuinely empty value, but keep strings that start with a space
	RETURN ""
ENDIF
lcstring = STRTRAN(tcstring, CHR(38), '&amp;') && ampersands must be escaped first
lcstring = STRTRAN(lcstring, '>', '&gt;')
lcstring = STRTRAN(lcstring, '<', '&lt;')
lcstring = STRTRAN(lcstring, '"', '&quot;')
IF THIS.CODEPAGE = 0
	lcxmlstring = ""
	FOR lnndx = 1 TO LEN(lcstring)
		lcchar = SUBSTR(lcstring, lnndx, 1)
		lnchar = ASC(lcchar)
		IF lnchar < 32 && BCH: unreadable/unprintable characters can cause errors within Excel so reset to space
			lnchar = 32
			lcchar = ' '
		ENDIF
		&& pass 7-bit characters through as-is; encode anything above 127 as a numeric entity
		lcxmlstring = lcxmlstring + IIF(lnchar < 128, lcchar, "&#" + TRANSFORM(lnchar) + ";")
	ENDFOR
ELSE
	lcxmlstring = lcstring
ENDIF
RETURN lcxmlstring
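On the speed front, one more idea for this same method (untested, just a sketch): most cell values probably contain nothing below CHR(32) or above CHR(127), so a cheap CHRTRAN() test up front could skip the character-by-character loop entirely for clean strings, e.g.:

* Untested sketch - lcUnsafe holds every character the loop would have to touch;
* in practice you would build it once (e.g. in Init) instead of on every call.
LOCAL lcUnsafe, lnNdx
lcUnsafe = ""
FOR lnNdx = 0 TO 31
	lcUnsafe = lcUnsafe + CHR(lnNdx)
ENDFOR
FOR lnNdx = 128 TO 255
	lcUnsafe = lcUnsafe + CHR(lnNdx)
ENDFOR
IF LEN(CHRTRAN(lcstring, lcUnsafe, "")) = LEN(lcstring)
	lcxmlstring = lcstring && nothing to encode, skip the loop
ELSE
	* ... fall through to the character-by-character FOR loop above ...
ENDIF

If most of the data is plain ASCII, that should save the per-character work on the bulk of the cells.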