LaRetta Posted May 23, 2008

Here is what I have: I have a data file with a single date field per DataID (FileDate). Users will change this date sometimes. I also have a trigger file which watches what is going on and fires emails as needed. This trigger file needs to see that FileDate has changed for a specific DataID and, if so, fire another email. So in theory, if the trigger info is different from the data info, trigger should see it in a relationship.

Currently I have a relationship based upon:

trigger::DataID = Data::DataID
AND trigger::FileDate = Data::FileDate

From trigger, I want a portal (or relationship) to all records in DATA that don't meet those criteria. Within DATA, the DataID will never be blank but the FileDate may be. If the FileDate is blank, I don't want that record included in the new relationship either.

I'm attaching a file. I can't think how to handle this via relationship (I'm relationally challenged) and I've resorted to GTRR (see the script in the file). However, I expect the number of records to grow to 100,000+ and I believe my script will become slow over time.

One other caveat ... I cannot add any fields to the data table. This should be handled either all in the trigger table or in another table (within the trigger file). I should say that data is a separate file, but I included it in the same file for simplicity. And the reason I have the redundant field in trigger (FileDate) is that it must be static for history purposes (the document is filed with the state and the FileDate within Data will change).

Ideas for relating to the non-related would be appreciated.

LaRetta

NonRelate.zip
comment Posted May 23, 2008

How about:

trigger::DataID = Data::DataID
AND trigger::FileDate ≠ Data::FileDate
LaRetta (Author) Posted May 23, 2008

Hi Michael, thanks for helping ...

This would work if the trigger table had ALL DataIDs in it, but it doesn't. I would also need to know those DataIDs that aren't in trigger at all (and have a FileDate). It doesn't even have to be the trigger table it relates to - just any table which can see everything except those records which match between Trigger and Data. It seems I need a cartesian which can also exclude those that currently match. I have a feeling I'm in trouble here.
comment Posted May 23, 2008

I am still confused here. No, actually I am even more confused...

Do you mean you want to see all data records that do NOT have a matching record in Trigger (matching by ID), as well as data records that DO have a matching record, but the date is different?

Also, am I correct in assuming that the relationship needs to return the SAME related set from ANY record in Trigger?
LaRetta (Author) Posted May 23, 2008 (edited)

"Do you mean you want to see all data records that do NOT have a matching record in Trigger (matching by ID), as well as data records that DO have a matching record, but the date is different?"

Yes to 'does not have a matching record in Trigger' IF there is a date, and yes if the date is different.

"Also, am I correct in assuming that the relationship needs to return the SAME related set from ANY record in Trigger?"

I'm unsure what you mean, Michael. If an exact match of DataID and FileDate exists in Trigger, I don't want it. If a DataID exists in Data and has ANY date, and that date doesn't match the DataID in Trigger (or there is no matching DataID in Trigger), then I want to create a record in Trigger to match it. I believe you are asking about the parent here and, if so, yes, any record in Trigger should show the same found set (any records in Data which don't match both DataID and date).

UPDATE: I know it's a bit confusing: if there is a relationship which matches DataID to DataID AND date to date, then I don't want those Data records. I want all other Data records, except those without a date in Data at all.

Edited May 23, 2008 by Guest - Added update
comment Posted May 23, 2008

"Also, am I correct in assuming that the relationship needs to return the SAME related set from ANY record in Trigger?"
"I'm unsure what you mean, Michael."

I mean that this relationship must consider all the records in the Trigger table. Therefore, it's not a question of which records are related to a specific record in Trigger, but which records are related (or not related) to ANY record in Trigger. So it doesn't matter which record in Trigger is the current record - the related set in Data will be the same for all of them.

"If a DataID exists in Data and has ANY date and that date doesn't match the DataID in Trigger (or there is no matching DataID in Trigger) then I want to create a record in Trigger to match it."

What is the purpose of this relationship, then? It seems that a find, or an import with matching on the two fields, would be more efficient here. It's not that it's not possible to construct such a relationship, but it would need to get the keys via other relationships (or custom functions), so it won't be quick.

It would also be helpful to know the sizes. I get the feeling that the data grows and trigger needs to keep up with it. If you need to deal with the full sets each time, it's going to get ugly pretty soon.
SurferNate Posted May 25, 2008

"Here is what I have: I have a data file with a single date field per DataID (FileDate). Users will change this date sometimes. I also have a trigger file which watches what is going on and fires emails as needed. This trigger file needs to see that FileDate has changed for a specific DataID and, if so, fire another email. So in theory, if the trigger info is different from the data info, trigger should see it in a relationship. - LaRetta"

Am I missing the point in thinking that Data must be manipulated from a Trigger layout? Otherwise Trigger will not "see" any changes made to Data, not in such a way that it can instantly evaluate and fire a script. In fact, the Data changes must actually originate in Trigger, if I understand FM logic correctly.

P.S. - Several years ago you had a very neat trick for auto-entering notes and logging them. FM 8 broke this and I figured out a new way to do it with a separate Journal table and a recursive CF to pull the data back into the NoteLog format. Ummm... I digress. I think I might have one way to do what I think you want... working on your file now...
LaRetta (Author) Posted May 25, 2008

Hi Michael, my apology for the length of this. I have revamped it over and over to get the size down.

You are spot on - it is truly such that, if there is a match between Trigger and Data (based on both DataID and FileDate), then that shouldn't show in a portal to another single-record table. I am using Trigger as the parent mostly for simplicity but, as you say, "it doesn't matter which record in Trigger is the current record - the related set in Data will be the same for all of them."

The Trigger table is an eMail table. Trigger (eMail) handles many notices, but it is based upon a record being created in eMail and including a future date to send. It remains in drafts until it pops into the ReadyToSend portal, which is based upon a date-only "less than" relationship to the current date (and not already sent). eMails are sent (if they appear in this relationship) hourly. This runs on its own. I do NOT flag records in data but rather have Data create a record in Trigger when it needs something done (there has always been a script running in Data and it has been easy to also include a new eMail record at the same time). In this instance, however, no script runs and an event trigger would be tricky.

The Data file contains one record per project, which is a report to the state. The next report file date varies project to project. There is a dropdown date on the field NextFileDate. I have been asked to watch this field and, if that date changes for any specific project, generate an eMail three weeks before it is due to notify the Project Manager to complete the report. The only place this NextFileDate exists is in the prior eMail sent. Also, Business does not want to change Data to accommodate this (it is a different business program entirely). It is one thing to add my sub-script into their script (eMail is an add-on module), but Data's functionality and UI shouldn't be changed to accommodate it. Some Businesses will not purchase the eMail module. If they do not, it will be simple to remove my sub-script from within their scripts (where indicated).

Some projects will only have the original report and no further ones; or it may be three years before the next one is even scheduled. So the NextFileDate may be blank.

Number of records: Trigger (up to 150,000 records before archive); Data (up to 1,000 before archive). Archive means a Project is purged from ALL tables and files (along with all of its children). The Trigger table holds auto-generated emails and User-created emails for the entire system, so only (approx) 800 of the Trigger records even relate to this part of the process.

I just thought a NEW relationship, relating to ALL records other than those which match in the first relationship, would be possible. Once I have these records (if they have a date and don't match the original relationship), I will be creating records in Trigger so they can be tracked and auto-sent when it's time.

This is as concise as I can make it but still give you what I think is necessary to see the picture. I appreciate you taking the time to even read it.

LaRetta
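For reference, a minimal sketch of what a ReadyToSend relationship like the one described above could look like - the TO and field names here are simplified/hypothetical, and the "not already sent" test is shown as a status match, which may differ from the actual file:

Interface::gToday ≥ eMail::SendDate
AND Interface::gDraftStatus = eMail::Status

where gToday is a global refreshed with the current date each time the hourly routine runs, and gDraftStatus holds the "Draft" value, so only unsent messages whose SendDate has arrived appear in the portal.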
LaRetta (Author) Posted May 25, 2008

Hi Nate!

"Am I missing the point in thinking that Data must be manipulated from a Trigger layout?"

I believe so. I do not want to flag or mark Data in any way. In fact, after reading what Comment and Soren and others say repeatedly, it shouldn't be necessary when you can dynamically know the answers. I am playing a hunch here; obviously I don't know the real answer or I wouldn't be posting.

"Otherwise Trigger will not 'see' any changes made to Data, not in such a way that it can instantly evaluate and fire a script. In fact, the Data changes must actually originate in Trigger, if I understand FM logic correctly."

Well, Trigger can SEE changes made to Data - via unstored calculation or relationship. The evaluation (when Trigger takes action) won't be instant but rather hourly (and actually daily in some processes). This all runs independently and fine ... if there is an eMail message in Drafts which has a SendDate, it will be sent according to the Rule# associated with it, because it will appear in the ReadyToSend relationship.

In the past, I've created quite a few relationships based upon various criteria which tell me what I need to know without flagging or marking the child data. But this is the first time that I want to relate to all records which DON'T relate as a match in ANOTHER relationship - without flagging or marking the child records, or even touching the Data file at all. I realize, in my usual way, my explanation diverts one's mind from the pure logic and facts of it. But it is there ... in fact, that previous sentence is the crux of it.

Thank you for joining the conversation. I appreciate your ideas a great deal!!

LaRetta
LaRetta (Author) Posted May 25, 2008

BTW, I should also clarify that, when I say the Data file, I am talking about only one table in one file; the table which contains these reports. The data file itself is very large.
comment Posted May 25, 2008

So far I have only skimmed this, but I don't see an answer to this: "What is the purpose of this relationship, then? It seems that a find, or an import with matching on the two fields, would be more efficient here."
LaRetta (Author) Posted May 25, 2008 (edited)

You are right ... I didn't specifically answer that question, but I thought I clearly explained the purpose of the relationship, and my file is an example of the 'find' approach. I assumed (wrongly) that, by giving you more information, the answers to that portion of your question would become clear. I apologize that they did not. It is also quite possible that I simply don't know what you envision for 'a find or import.' So I won't dismiss your suggestion at all.

An import matching on the two fields will import those Data records with a blank date. I suppose I could then delete them back out. But I don't want to import if they match ... it would leave those records that are unchanged within my resultant found set, which I would then have to eliminate again (based upon Update and Add To).

As for a find, it is no different than my example file, which is GTRR on a match to both fields, Show Omitted, then constrain to those with dates, and then importing that set into Trigger - or actually I prefer writing directly through, using Set Field[].

I am open to ALL methods, Michael. But everything I've tried feels clunky. If a relationship can work, then that would be simplest, no? Hourly or daily, my process will see if there is any related record ( If [ not IsEmpty ( newDataRelationship::DataID ) ] ) ... and fire if there is. Otherwise I will have to perform a find and return no records just to know there are no new records. So which is fastest? Maybe all ways lead down the clunky road (considering the constraints on this process) and maybe a relationship alone can't be used. It just seemed worth trying, and I have been trying to no avail.

Edited May 25, 2008 by Guest - Added sentence
SurferNate Posted May 25, 2008

Well, I can see why - personally, on one occasion I really, really wanted to see records in a portal that otherwise had no natural relationship. That doesn't make it the best way, but it might still be the desired way at the moment.

I solved the question. File attached. It took two recursive CFs and five more TOs for me to do it the way I think LaRetta was asking for the result. Tell me if this is actually correct and what you wanted. I can explain and post the CFs too, but they're fairly specific to this example file. You may want to adjust them to your needs.

NonRelate_Copy.fp7.zip
comment Posted May 26, 2008

I believe this may be the most significant piece of the puzzle: "Trigger (up to 150,000 records before archive); Data (up to 1,000 before archive)."

Wow - that's some difference. I mean, any way you look at it, it needs to check unstored data across the entire set. So going over 1k records vs. 150k...

Naturally, I'd be looking for a way to do this from the point-of-view of the Data table. If we could have a calc field in Data =

Case ( Date and ( not Trigger::DataID or Trigger::DataID and Date ≠ Trigger::Date ) ; DataID )

we could collect these IDs via List() over an x relationship to Data, and have an instant relationship to the records you want. Or we could just do a find on this field. I don't see what advantage GTRR has over a find in this situation - it's not like you need to look at the portal, and the action is scripted anyway.

But I understand you don't want to add a calc field to the Data table. So how about just looping through the Data records, checking the above condition, and if it's not met, omitting the record (a rough sketch of such a loop appears below)? Or, if you prefer writing directly through using Set Field[] (why?), you could just do that within the same loop. A variation on the same method: instead of omitting, put the eligible IDs in a global, then use the global as the matchfield in the relationship you seek. I'm not sure it would be any faster, but it might be worth a test. It's too bad you cannot add a global to the Data table itself - because then you could use Replace Field Contents on it instead of looping (as suggested by Agnes).

I'll mention a couple of other options as well, because with 150k records you can GUESS - but you can never really KNOW which is fastest until you try.

• Do what you've started to do, i.e. GTRR[FS], Show Omitted and constrain on * in Date. I don't think it's clunky at all - on the contrary, it seems to me the most straightforward method here. Certainly more straightforward than trying to build a list of eligible Data IDs from the Trigger table. Such a list would have to be built on the basis of the existing relationship AND a few new ones, so it would just be adding complexity. The only issue is speed - but if it holds up, that would be my preferred method.

• I still think import is an option worth investigating. I would have to play with it to find the exact configuration, but I believe a combination of matching and validation could import the correct records straight away, with no need to clean up afterwards.
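A minimal sketch of that loop, run from a layout based on Data. "Trigger_byIDandDate" is a hypothetical TO name for the existing two-field relationship (DataID and FileDate) seen from the Data side:

Go to Layout [ "Data" ]
Show All Records
Go to Record/Request/Page [ First ]
Loop
   Exit Loop If [ Get ( FoundCount ) = 0 ]
   If [ IsEmpty ( Data::FileDate ) or not IsEmpty ( Trigger_byIDandDate::DataID ) ]
      # no date, or an exact DataID + FileDate match already exists in Trigger - drop it
      Omit Record
   Else
      Go to Record/Request/Page [ Next ; Exit after last ]
   End If
End Loop
# the remaining found set holds the eligible Data records (dated, and new or changed)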
SurferNate Posted May 26, 2008

"Certainly more straightforward than trying to build a list of eligible Data IDs from the Trigger table. Such a list would have to be built on the basis of the existing relationship AND a few new ones, so it would just be adding complexity. The only issue is speed - but if it holds up, that would be my preferred method."

For my own education, I really do care whether my "messy" solution or the "clean" solution with GTRR is faster on the large data set. I think I am coming up against the large-data-set/speed barrier in a couple of my own solutions, and want some background info to make better structural decisions in the future. Now that I understand recursive CFs a little and can actually write them, I'm wondering if I should reconsider the Summarize Grandchildren solution too. I just don't know enough yet about RCF-vs-relationship-vs-scripting, as far as speed of results is concerned...
comment Posted May 26, 2008

I don't think I can formulate any general rules - each case needs to be considered on its own merits. And as I said, you never really know until you test. Or at least I don't feel I have enough experience to be able to predict with confidence which method will be faster.

However, you need to be aware of the limits of recursive custom functions. A custom function will compute a maximum of 50,000 recursive calls, and even that only if the function is tail-recursive - otherwise the limit drops down to 10,000 calls.
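To illustrate the difference, here is a hypothetical pair of custom functions that build a return-delimited list of the numbers 1..n (the function and parameter names are made up for the example):

// NOT tail-recursive: the recursive result is fed back into List(), so every
// pending call must be kept open - limited to about 10,000 calls
NumberList ( n ) =
Case (
   n < 1 ; "" ;
   List ( NumberList ( n - 1 ) ; n )
)

// Tail-recursive: the recursive call IS the entire result, with the work
// carried along in an accumulator parameter - limited to about 50,000 calls
NumberListAcc ( n ; accumulator ) =
Case (
   n < 1 ; accumulator ;
   NumberListAcc ( n - 1 ; List ( n ; accumulator ) )
)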
SurferNate Posted May 26, 2008 (edited)

Well, I think I did find out. I tried adding 150,000 records and GTRR works, with about a 60-second pause the first time and a 5-second pause after the first run. My RCF version sputtered and gave up outright under the sheer amount of data. Can't dig through that mountain, gotta go around it. And so I learned a couple of new things.

P.S. - I also see what you mean about specific cases being different. I was just noting that in this case the size of the data set alone basically excludes the use of recursion to solve the problem.

Edited May 26, 2008 by Guest
SurferNate Posted May 26, 2008

So, in thinking about this, is this more of a case where a transactional approach is best? I mean, should the Triggers file be transactional in relation to the Data file? That of course assumes that my very limited understanding of the difference between relational and transactional is even applicable here. Is that what you mean by suggesting a Find/Import/Export script?
SurferNate Posted May 26, 2008

Okay, I found the near limit on recursion in this case. It works relatively smoothly at ~5,000 records. There's still a lag, but about 2 seconds at most. Still, it's not capable of handling a large enough data set to be practical in any real sense...
comment Posted May 26, 2008

"the difference between relational and transactional"

I am not sure what "transactional" means in this context, and what difference (if any) there is between 'relational' and 'transactional'.
LaRetta (Author) Posted May 26, 2008

Wow. A lot of information to absorb. Thank you both very much. I downloaded Nate's file last night and plan to run tests today.

Keep in mind the 150,000 is a guess and is the upper limit before archive (but again a pretty wild guess). I simply cannot predict the volume of email messages, because each business which uses it may use it differently. I am simply planning contingency here. The table on the data side seems like a pretty safe (and max) guess. The other issue is this ... I would rather not have a lot of TOs attached or added complexity when this will 'probably' be the only situation in which it will be used. Usually a script will be running which can slide my sub-script in with no problem.

About importing ... however I choose to go, I will then need to add that Data found set to the eMail Trigger table. Why Set Field[] over import? If new records in Trigger are created based upon a match in a relationship (with Allow Creation on), then there is no possibility of adding the same records twice. An import, if it fails in the middle for any reason and its purpose is to ADD records, can leave some records not added. Then, when fired again, it can duplicate the first several records. I find it much more untrustworthy, requiring a count of the found set before, a count of the resultant imported set after, and possibly deleting the found set and re-running and so forth. Set Field[] with Allow Creation can NEVER duplicate a record. By using a permanent connection between the tables (the relationship), one can re-run the same script and it won't duplicate.

Now ... this is my personal experience only and I am always open to being shown the light (or to considering alternate views). Also, Set Field[]/Allow Creation can easily throw an error which is trapped if it doesn't set (because of record locking), and the same script can be run again without fear. I am not so sure that Import with Update/Add To can offer such protection. Again, all personal perceptions and probably my inability to script perfectly, so feel more than free to straighten me out.

I shall spend my day testing all of the great concepts! It will be a fun day!

LaRetta
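A minimal sketch of the Set Field[] / Allow Creation pattern being described - the TO and field names are hypothetical, and it assumes a Data-to-Trigger relationship matching on both DataID and FileDate with "Allow creation of records in this table via this relationship" enabled on the Trigger side:

# run from the found set of eligible Data records
Go to Record/Request/Page [ First ]
Loop
   # if no Trigger record matches this DataID + FileDate, setting a field
   # through the allow-creation relationship creates one and fills in both
   # match fields; if a match already exists, it is simply updated -
   # so re-running the script can never produce a duplicate
   Set Field [ Trigger_create::Rule ; "NextReportNotice" ]
   Go to Record/Request/Page [ Next ; Exit after last ]
End Loop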
comment Posted May 26, 2008

A quick question, if you will: If the DataID matches, but the date is different - will you be creating a new record in Trigger, or updating the existing one? IIUC, you will always add a new record (meaning DataID is not unique in Trigger), but I want to be sure.
LaRetta (Author) Posted May 26, 2008

"If the DataID matches, but the date is different - will you be creating a new record in Trigger, or updating the existing one?"

We will create a new one. The old one (the prior email) will still exist, because it resides in a portal on the Project record to show that the Project Manager was notified (and what date it was due last time).

"IIUC, you will always add a new record (meaning DataID is not unique in Trigger), but I want to be sure."

I will always add a new record in Trigger if the DataID and Date don't already exist in combination. If the DataID doesn't exist and the date is blank, it shouldn't be added either.
comment Posted May 26, 2008

Well, then it's rather easy - see attached. The question, again, is whether it's fast enough.

Notes: The names do not play any part - they're there just for visual confirmation. Only SourceID and Value matter (I used a number instead of a date). Import the initial set, then play with the source: modify values, fill empty values, etc., and see what gets imported on the next import.

RestrictImport.fp7.zip
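One way to reproduce the behavior the demo shows - the attached file may differ in detail, and the field names here are hypothetical. In the target (Trigger) table:

ImportKey = Text field, auto-enter calculation: SourceID & "|" & Value
            Validation: Always ; Unique value
Value     = Validation: Always ; Validated by calculation: not IsEmpty ( Value )

The import script is then just:

Go to Layout [ "Trigger" ]
Set Error Capture [ On ]
Import Records [ No dialog ; source table ; Add new records ; map SourceID → SourceID and Value → Value ; perform auto-enter options while importing ]
# each incoming row gets its ImportKey auto-entered; any row that duplicates
# an existing key, or arrives with an empty Value, fails validation and is
# simply skipped - only new or changed rows are added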
LaRetta (Author) Posted May 26, 2008

"I just don't know enough yet about RCF-vs-relationship-vs-scripting, as far as speed of results is concerned..."

Hey, Nate, I totally understand. I have approx. 45 speed tests on my list and just can't find the time to perform them across the network to analyze packet transfers, record locking and server-side vs client-side resources. Not only that, but results change from version to version, so the tests I *have* done should be re-run with each version/configuration! Ah, if only I were free - it is a passion I would pursue exclusively if I could.
LaRetta (Author) Posted May 26, 2008

Alright. This goes against logic (at least in MY mind). Your import is ADDING only and it won't duplicate existing records - I tried! I even changed Alpha's value to 300 and it pulled it in. Only using ADD records?? I feel delightfully dizzy! Of course it's the validation which stops the same records from coming in, right? An import can do this?? Okay, if you'd like to provide a few words of explanation, I'd sure appreciate it, but regardless I'll figure this puppy out! This ROCKS!! And I'm so glad you've shown me this (for this problem as well as many future uses)! I shall get to testing now!!
LaRetta (Author) Posted May 26, 2008 (edited)

Dang, that's beautiful! Of all the things I've learned within at least the last 6 months, this rocks the most!! Okay, there are many wonderful things I've learned within these past 6 months, but this is VERY POWERFUL and pretty!! I thought ADD only on import would always pull everything in!! You are showing everything in Source (all records) but it's NOT pulling them all in! I hope everyone reading this post sees the power of this!!! Okay, okay, I'll breathe now...

Edited May 26, 2008 by Guest
LaRetta (Author) Posted May 26, 2008

"It's too bad you cannot add a global to the Data table itself - because then you could use Replace Field Contents on it instead of looping (as suggested by Agnes)."

Michael? Can you tell me where I might find the reference to this? It is probably something I have in my 'study now' folder but I've been a bit behind.
David Jondreau Posted May 26, 2008

Whoa. I hadn't known this either. If the import of a specific record fails the validation for a target field, the import of that record fails. It's a powerful feature which, of course, can be very useful or very damaging. Thanks Michael and LaRetta.
LaRetta (Author) Posted May 26, 2008

"It's a powerful feature which, of course, can be very useful or very damaging."

Indeed! I believe it's the 'Always' validate vs. 'Only during data entry' setting which controls it. And, if Error Capture were on, it would probably import those records (bypassing validation) ... but I haven't had time to test all the combinations, so these are only guesses right now. Regardless, yes, it can also bite if not understood! It tickles me!
LaRetta (Author) Posted May 26, 2008

"And, if Error Capture were on, it would probably import those records (bypassing validation)"

Wrong. It still won't import duplicate records (go against validation). I will quit speculating (it's a curse of mine).
LaRetta (Author) Posted May 26, 2008 (edited)

Well, Michael, I'm sitting here in a bit of shock. I don't believe I have anything further to test.

I created 1,500 records in Data (in your file). Then I added a timer ( Set Variable [ $start ; Get ( CurrentTimeStamp ) ] ) at the beginning and $end at the end. The entire process took 1 second to run. Everyone should keep in mind that there is Show All Records on the Data side, so it is checking ALL 1,500 RECORDS individually before allowing the import! It isn't even necessary to Show All Records on the Target side, because FM validates unique even if the records aren't showing (I scratched my head on that before it finally dawned on me).

The process is instant and flawless; oh, I tried very hard to break it, believe me. And the process appears to answer ALL my needs just by itself, i.e. I don't mind running this every day or even every hour for that matter, because it is instant! And I'm ditching Set Field[] as well.

I started to run some of the other tests and comparisons but shut each of them off before they hit 5 seconds. I saw no point in continuing. A simple import which is instant beats ANY other approach hands down (in my opinion); at least for this need. Unless I am missing something obvious (and I've been known to), I would say that you have nailed my problem to the wall ... bang zoom!!

UPDATE: I did not bother increasing the number of records on the Target side when I realized that validate-unique wouldn't be affected by the number of records. If I am incorrect here, please let me know. Maybe I should test this with 150,000 records anyway, but I'm convinced it wouldn't make (much of) a difference.

Edited May 26, 2008 by Guest - Added update
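For anyone wanting to reproduce the timing, the wrapper is just (a minimal sketch; the import step's options are whatever the test file uses):

Set Variable [ $start ; Value: Get ( CurrentTimeStamp ) ]
Import Records [ No dialog ; the Add-only import described above ]
Set Variable [ $end ; Value: Get ( CurrentTimeStamp ) ]
Show Custom Dialog [ "Timer" ; GetAsNumber ( $end ) - GetAsNumber ( $start ) & " second(s)" ]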
LaRetta (Author) Posted May 26, 2008 (edited)

Sorry to go on and on, but I went ahead and created 150,000 records in Target. It still takes only 1 second to import. 1 second. My GTRR method would isolate the new records, but I would still need to import them. An import alone ... Michael's import using ADD only with validation ... solves the entire problem which started this thread. I will be reviewing many techniques throughout my systems because of this new understanding and appreciation of this method.

Edited May 26, 2008 by Guest - Changed a few words
SurferNate Posted May 26, 2008

Ahhh, I understand better now, on both sides of the question. Your main goal is to periodically ADD a "copy" of any records from Data to Trigger that have changed since the last transaction (this is partially what I mean by transactional: the data is not manipulated "live", on demand, by a relationship, but explicitly, and in a controlled way, by a specific transaction). Then, also periodically, you have a script that "fires" emails based upon any new records added to Trigger since the last successful script run, yes?

The portal is relational; the import is transactional. One might want to look up Transactional Database on Wikipedia for a much better understanding than what I have so far (I could even be missing the point entirely, who knows)...

Cool, I learned a whole bucket of good stuff here...
LaRetta (Author) Posted May 26, 2008 (edited)

I learned a bunch as well, Nate!

"The portal is relational; the import is transactional."

The difference I see is this (in my original concept): if there is a relationship, the script only needs to check if there is ANY related record (a simple If[] test). If so, it proceeds and performs the import of all related records; if not, the script stops. Whereas, if we perform a Find and then Import, it would take quite a bit longer (actually 5 seconds in my test). This was why I wanted a relationship first and foremost - because it would be quicker than a standard find and then import. It isn't that the relationship would REPLACE anything, but it would just allow a quick abort without having to search for those related records in ANY way.

Michael showed that we don't need to perform a find at all, nor do we need a relationship (notice the lack of one in the graph), but rather, we DON'T CARE. We simply import ALL Data records and validation does the filtering (thus find/omit/import) for us in a ONE-STEP WHACK - DONE!!

I'm still very excited about this. Speed seems to be about a second for every 2,000 data records (actually it is 4 seconds for 10,300 Data records). If Data were huge, maybe this wouldn't be the way to go, but I truly believe that any other method would still be slower than this one.

Okay, I'll finally shut up. Hopefully Michael will fill in any blanks for us.

Edited May 26, 2008 by Guest