| Type: | Package |
| Title: | Preparation, Checking and Post-Processing Data for PK/PD Modeling |
| Version: | 0.2.2 |
| Maintainer: | Philip Delff <philip@delff.dk> |
| Description: | Efficient tools for preparation, checking and post-processing of data in PK/PD (pharmacokinetics/pharmacodynamics) modeling, with focus on use of Nonmem, including consistency, traceability, and Nonmem compatibility of Data. Rigorously checks final Nonmem datasets. Implemented in 'data.table', but easily integrated with 'base' and 'tidyverse'. |
| License: | MIT + file LICENSE |
| RoxygenNote: | 7.3.2 |
| Depends: | R (≥ 3.1.0) |
| Imports: | data.table, fst |
| Suggests: | testthat, knitr, NMsim, NMcalc, formatR, mime, rmarkdown, ggplot2, tibble, covr, htmltools, spelling |
| Encoding: | UTF-8 |
| BugReports: | https://github.com/nmautoverse/NMdata/issues |
| Language: | en-US |
| URL: | https://nmautoverse.github.io/NMdata/ |
| NeedsCompilation: | no |
| Packaged: | 2025-11-03 17:45:44 UTC; philipde |
| Author: | Philip Delff [aut, cre], Brian Reilly [ctb], Eric Anderson [ctb] |
| Repository: | CRAN |
| Date/Publication: | 2025-11-04 13:40:02 UTC |
Translate Nonmem filters to R code and apply to data
Description
Translate Nonmem filters to R code and apply to data
Usage
NMapplyFilters(data, file, lines, filters, invert = FALSE, as.fun, quiet)
Arguments
data | An input data object. Could be read with NMreadCsv or NMscanInput. |
file | Path to mod/lst file. Only one of file or lines is to be given. See '?NMreadSection' for understanding when to use file or lines. Only used when 'filters' is not provided. |
lines | The mod/lst as character, line by line. |
filters | A 'data.frame' with filters as returned by 'NMreadFilters()'. If not supplied, filters will be read from 'file'/'lines'. |
invert | Invert the filters? This means read what Nonmem would disregard, and disregard what Nonmem would read. |
as.fun | The default is to return data as a data.frame. Pass a function (say tibble::as_tibble) in as.fun to convert to something else. If data.tables are wanted, use as.fun="data.table". The default can be configured using NMdataConf. |
quiet | Don't report information along the way if no warnings or errors occur. Default is FALSE. |
Details
This is not bulletproof. Nested conditions are not supported at all.
Value
data with filters applied
See Also
NMreadFilters
Other Nonmem: NMextractText(), NMgenText(), NMreadSection(), NMreplaceDataFile(), NMwriteSection()
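A minimal usage sketch (not taken from the package examples), assuming a hypothetical input data file "derived/pkdata.csv" and control stream "run001.mod":
## read the input data and keep only the rows Nonmem would read
dat <- NMreadCsv("derived/pkdata.csv")
dat.kept <- NMapplyFilters(dat, file="run001.mod")
## rows Nonmem would disregard (e.g. due to IGNORE statements)
dat.dropped <- NMapplyFilters(dat, file="run001.mod", invert=TRUE)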
Compare $INPUT in control stream to column names in input data
Description
Mis-specification of column names in $DATA is a common source of problems with Nonmem models, and should be one of the first things to check for when seemingly inexplicable things happen. This function lines up input data column names with $DATA and how NMscanData will interpret $DATA so you can easily spot if something is off.
Usage
NMcheckColnames(file, as.fun, ...)
Arguments
file | A Nonmem control stream or list file |
as.fun | See ?NMdataConf |
... | Additional arguments passed to |
Value
An overview of input column names and how they are translated
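A brief sketch, assuming a hypothetical output control stream "run001.lst":
## line up $INPUT column names against the input data header
NMcheckColnames("run001.lst")
## return the overview as a data.table instead of a data.frame
NMcheckColnames("run001.lst", as.fun="data.table")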
Check data for Nonmem compatibility or check control stream for data compatibility
Description
Check data in various ways for compatibility with Nonmem. Some findings will be reported even if they will not make Nonmem fail, because they are typical dataset issues.
Usage
NMcheckData( data, file, covs, covs.occ, cols.num, col.id = "ID", col.time = "TIME", col.dv = "DV", col.mdv = "MDV", col.cmt = "CMT", col.amt = "AMT", col.flagn, col.row, col.usubjid, cols.dup, type.data = "est", cols.disable, na.strings, return.summary = FALSE, quiet = FALSE, as.fun)
Arguments
data | The data to check. |
file | Alternatively to checking a data object, you can use file to specify a control stream to check. This can either be a (working or non-working) input control stream or an output control stream. In this case, |
covs | columns that contain subject-level covariates. They are expected to be non-missing, numeric and not varying within subjects. |
covs.occ | A list specifying columns that contain subject:occasion-level covariates. They are expected to be non-missing, numeric and not varying within combinations of subject and occasion. |
cols.num | Columns that are expected to be present, numeric and non-NA. If a character vector is given, the columns are expected to be used in all rows. If a column is only used for a subset of rows, use a list and name the elements by subsetting strings. See examples. |
col.id | The name of the column that holds the subject identifier. Default is "ID". |
col.time | The name of the column holding actual time. |
col.dv | The name of the column holding the dependent variable. For now, only one column can be specified, and |
col.mdv | The name of the column holding the binary indicator of the dependent variable missing. Default is |
col.cmt | The name(s) of the compartment column(s). These will be checked to be positive integers for all rows. They are also used in checks for row duplicates. |
col.amt | The name of the dose amount column. |
col.flagn | Optionally, the name of the column holding numeric exclusion flags. Default value is |
col.row | A column with a unique value for each row. Such a column is recommended to use if possible. Default( |
col.usubjid | Optional unique subject identifier. It is recommended to keep a unique subject identifier (typically a character string including an abbreviated study name and the subject id) from the clinical datasets in the analysis set. If you supply the name of the column holding this identifier, NMcheckData will check that it is non-missing, that it is unique within values of col.id (i.e. that the analysis subject ID's are unique across actual subjects), and that col.id is unique within the unique subject ID (a violation of the latter is less likely). |
cols.dup | Additional column names to consider in search of duplicate events. |
type.data |
|
cols.disable | Columns to not check. This is particularly useful when checking data sets that do not include e.g. 'CMT', 'EVID', and others. To skip checking specific columns, provide their names like 'cols.disable=c("CMT","EVID")'. |
na.strings | Strings to be accepted when trying to convert characters to numerics. This will typically be a string that represents missing values. Default is ".". Notice, actual |
return.summary | If TRUE (not default), the table summary that is printed if |
quiet | Keep quiet? Default is not to. |
as.fun | The default is to return data as a |
Details
The following checks are performed. The term "numeric" does not refer to a numeric representation in R, but compatibility with Nonmem. The character string "2" is in this sense a valid numeric, "id2" is not.
- Column names must be unique and not contain special characters
- If an exclusion flag is used (for ACCEPT/IGNORE in Nonmem), elements must be non-missing and integers. Notice, if an exclusion flag is found, the rest of the checks are performed on rows where that flag equals 0 (zero) only.
- If a unique row identifier is found, it has to be non-missing, increasing integers.
- col.time (TIME), EVID, col.id (ID), col.cmt (CMT), and col.mdv (MDV): If present, elements must be non-missing and numeric.
- col.time (TIME) must be non-negative
- EVID must be in {0,1,2,3,4}.
- CMT must be positive integers. However, it can be missing or zero for EVID==3.
- MDV must be the binary (1/0) representation of is.na(DV) for observation records (EVID==0).
- AMT must be 0 or NA for EVID 0 and 2
- AMT must be positive for EVID 1 and 4
- DV must be numeric
- DV must be missing for EVID in {1,4}.
- If found, RATE must be a numeric, equaling -2 or non-negative for dosing events.
- If found, SS must be a numeric, equaling 0 or 1 for dosing records.
- If found, ADDL must be a non-negative integer for dosing records. II must be present.
- If found, II must be a non-negative integer for dosing records. ADDL must be present.
- ID must be positive and values cannot be disjoint (all records for each ID must follow each other. This is technically not a requirement in Nonmem but most often an error. Use a second ID column if you deliberately want to soften this check)
- TIME cannot be decreasing within ID, unless EVID in {3,4}.
- All ID's must have doses (EVID in {1,4})
- All ID's must have observations (EVID==0)
- ID's should not have leading zeros since these will be lost when Nonmem reads, then writes the data.
- If a unique row identifier is used, this must be non-missing, increasing, integer
- Character values must not contain commas (they will mess up writing/reading csv)
- Columns specified in the covs argument must be non-missing, numeric and not varying within subjects.
- Columns specified in covs.occ must be non-missing, numeric and not varying within combinations of subject and occasion.
- Columns specified in cols.num must be present, numeric and non-NA.
- If a unique subject identifier column (col.usubjid) is provided, col.id must be unique within values of col.usubjid and vice versa.
- Events should not be duplicated. For all rows, the combination of col.id, col.cmt, col.evid, col.time plus the optional columns specified in cols.dup must be unique. In other words, if a subject (col.id) has, say, two observations (col.evid) at the same time (col.time), this is considered a duplicate. The exception is if there is a reset event (col.evid is 3 or 4) in between the two rows. cols.dup can be used to add columns to this analysis. This is useful for different assays run on the same compartment (say a DVID column) or maybe stacked datasets. If col.cmt is of length>1, this search is repeated for each cmt column.
Value
A table with findings
Examples
## Not run: dat <- readRDS(system.file("examples/data/xgxr2.rds", package="NMdata"))
NMcheckData(dat)
dat[EVID==0,LLOQ:=3.5]
## expecting LLOQ only for samples
NMcheckData(dat,cols.num=list(c("STUDY"),"EVID==0"=c("LLOQ")))
## End(Not run)
Check input data based on control stream
Description
Finds input data and checks compatibility with Nonmem control stream and runs NMcheckData. Don't call this function directly - use the file argument in NMcheckData instead.
Usage
NMcheckDataFile( file, col.row, col.id = "ID", formats.read = "csv", quiet = FALSE, file.mod, dir.data, as.fun, use.rds, ...)
Arguments
file | a model file (input or output control stream) |
col.row | row identifier |
col.id | subject identifier |
formats.read | Prioritized input data file formats to look for and use if found. Default is c("csv") which means |
quiet | Keep quiet? Default is FALSE. |
file.mod | How to find the input control stream if you are using the output control stream. |
dir.data | The data directory can only be read from the control stream (.mod) and not from the output file (.lst). So if you only have the output control stream, use dir.data to tell in which directory to find the data file. If dir.data is provided, the .mod file is not used at all. |
as.fun | The function to run results through before returning them. |
use.rds | Deprecated. Use formats.read instead. |
... | passed to NMcheckData |
Value
A list of diagnostics
Translate Nonmem $PK, $PRED sections or other Nonmem code to R code
Description
Translate Nonmem $PK, $PRED sections or other Nonmem code to R code
Usage
NMcode2R(text)
Arguments
text | the Nonmem code. |
Details
You probably want to run this on text obtained by using the NMreadSection function.
Value
R code as text
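A short sketch of the intended workflow, assuming a hypothetical control stream "run001.lst"; the returned text will depend on the model code:
## translate the $PK section of a control stream to R code
pk.lines <- NMreadSection("run001.lst", section="PK")
pk.R <- NMcode2R(pk.lines)
## or translate a few Nonmem lines supplied directly
NMcode2R(c("TVCL=THETA(1)", "CL=TVCL*EXP(ETA(1))"))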
Configure default behavior of NMdata functions
Description
Configure default behavior across the functions in NMdata rather than typing the arguments in all function calls. Configure for your file organization, data set column names, and other NMdata behavior. Also, you can control what data class NMdata functions return (say data.tables or tibbles if you prefer one of those over data.frames).
Usage
NMdataConf(..., allow.unknown = FALSE, summarize = FALSE)
Arguments
... | NMdata options to modify. These are named arguments, like for base::options. Normally, multiple arguments can be used. The exception is if reset=TRUE is used which means all options are restored to default values. If NULL is passed to an argument, the argument is reset to default. See examples for how to use. |
allow.unknown | Allow storing configuration of variables that are not pre-defined in NMdata. This should only be needed in cases where say another package wants to use the NMdata configuration system for variables unknown to NMdata. |
summarize | If |
Details
Parameters that can be controlled are:
args.fread Arguments passed to fread when reading _input_ data files (fread options for reading Nonmem output tables cannot be configured at this point). If you change this, you are starting from scratch, except for file. This means that existing default argument values are all disregarded.
args.fwrite Arguments passed to fwrite when writing csv files (NMwriteData). If you use this, you have to supply all arguments you want to use with fwrite, except for x (the data) and file.
as.fun A function that will be applied to data returned by various data reading functions (NMscanData, NMreadTab, NMreadCsv, NMscanInput, NMscanTables). Also, data processing functions like mergeCheck, findCovs, findVars, flagsAssign, flagsCount take this into account, but slightly differently. For these functions that take data as arguments, the as.fun configuration is only taken into account if the data passed to the functions are not of class data.table. The argument as.fun to these functions is always adhered to. Pass an actual function, say as.fun=tibble::as_tibble. If you want data.table, use as.fun="data.table" (not a function).
check.time Logical, applies to NMscanData only. NMscanData by default checks if the output control stream is newer than the input control stream and input data. Set this to FALSE if you are in an environment where time stamps cannot be relied on.
col.flagc The name of the column containing the character flag values for data row omission. Default value is flag. Used by flagsAssign, flagsCount.
col.flagn The name of the column containing numerical flag values for data row omission. Default value is FLAG. Used by flagsAssign, flagsCount, NMcheckData.
col.model The name of the column that will hold the name of the model. See modelname too (which defines the values that the column will hold).
col.nmout A column of this name will be a logical representing whether row was in output table or not.
col.nomtime The name of the column holding nominal time. This is only used for sorting columns by NMorderColumns.
col.row The name of the column containing a unique row identifier. This is used by NMscanData when merge.by.row=TRUE, and by NMorderColumns (row counter will be first column in data).
col.id The name of the column holding the numeric subject ID. As of 'NMdata' 0.1.5 this is only used for sorting columns by NMorderColumns.
col.time The name of the column holding actual time. As of 'NMdata' 0.1.5 this is only used for sorting columns by NMorderColumns.
dir.psn The directory in which to find psn executables like 'execute' and 'update_inits'. Default is "" meaning that executables must be in the system search path. Not used by NMdata.
dir.res Directory in which 'NMsim' will store simulation results files. Not used by NMdata. See dir.sims too.
dir.sims Directory in which 'NMsim' will store Nonmem simulations. Not used by NMdata. See dir.res too.
file.cov A function that will derive the path to the covariance (.cov) output file stream based on the path to the output control stream. Technically, it can be a string too, but when using NMdataConf, this would make little sense because it would direct all output control streams to the same input control streams.
file.ext A function that will derive the path to the parameter (.ext) output file stream based on the path to the output control stream. Technically, it can be a string too, but when using NMdataConf, this would make little sense because it would direct all output control streams to the same input control streams.
file.mod A function that will derive the path to the input control stream based on the path to the output control stream. Technically, it can be a string too, but when using NMdataConf, this would make little sense because it would direct all output control streams to the same input control streams.
file.phi A function that will derive the path to the Nonmem output (.phi) file containing individual ETA, ETC, and/or PHI values based on the path to the output control stream. Technically, it can be a string too, but when using NMdataConf, this would make little sense because it would direct all output control streams to the same input control streams.
file.data A function that will derive the path to the input data based on the path to the output control stream. Technically, it can be a string too, but when using NMdataConf, this would make little sense because it would direct all output control streams to the same input control streams.
formats.read Prioritized input data file formats to look for and use if found. Default is c("rds","csv") which means rds will be used if found, and csv if not. fst is possible too.
formats.write Character vector of formats to write. Default is c("csv","rds"). "fst" is possible too.
merge.by.row Adjust the default combine method in NMscanData.
modelname A function that will translate the output control stream path to a model name. Default is to strip .lst, so /path/to/run1.lst will become run1. Technically, it can be a string too, but when using NMdataConf, this would make little sense because it would translate all output control streams to the same model name.
path.nonmem Path (a character string) to a nonmem executable. Not used by NMdata. Default is NULL.
quiet For non-interactive scripts, you can switch off the chatty behavior once and for all using this setting.
recover.rows In NMscanData, include rows from input data files that do not exist in output tables? This will be added to the $row dataset only, and $run, $id, and $occ datasets are created before this is taken into account. A column called nmout will be TRUE when the row was found in output tables, and FALSE when not. Default is FALSE.
use.input In NMscanData, merge with columns in input data? Using this, you don't have to worry about remembering to include all relevant variables in the output tables. Default is TRUE.
use.rds Deprecated, use formats.read and formats.write instead. Affects NMscanData(), NMscanInput(), NMwriteData().
Recommendation: Use this function transparently in the code and not in a configuration file hidden from other users.
Value
If no arguments given, a list of active settings. If arguments given and no issues found, TRUE invisibly.
Examples
## get current defaults
NMdataConf()
## change a parameter
NMdataConf(check.time=FALSE)
## reset one parameter to default value
NMdataConf(modelname=NULL)
## reset all parameters to defaults
NMdataConf(reset=TRUE)
Get NMdataConf parameter properties
Description
Get NMdataConf parameter properties
Usage
NMdataConfOptions(name, allow.unknown = TRUE)
Arguments
name | Optionally, a single parameter name (say "as.fun"). |
allow.unknown | Allow access to configuration of variables that are not pre-defined in NMdata. This should only be needed in cases where say another package wants to use the NMdata configuration system for variables unknown to NMdata. |
Value
If name is provided, a list representing one argument, otherwise a list with an element for each argument that can be customized using NMdataConf.
Determine active parameter value based on argument and NMdataConf setting
Description
Determine active parameter value based on argument and NMdataConf setting
Usage
NMdataDecideOption(name, argument, allow.unknown = FALSE)
Arguments
name | The name of the parameter, say "as.fun" |
argument | The value to pass. If missing or NULL, the value returned by NMdataConf/NMdataGetOption will typically be used. |
Value
Active argument value.
Look up default configuration of an argument
Description
Look up default configuration of an argument
Usage
NMdataGetOption(...)
Arguments
... | argument to look up. Only one argument can be looked up. |
Value
The value active in configuration
Basic arithmetic on NMdata objects
Description
Basic arithmetic on NMdata objects
Usage
## S3 method for class 'NMdata'
merge(x, ...)
## S3 method for class 'NMdata'
t(x, ...)
## S3 method for class 'NMdata'
dimnames(x, ...)
## S3 method for class 'NMdata'
rbind(x, ...)
## S3 method for class 'NMdata'
cbind(x, ...)
Arguments
x | an NMdata object |
... | arguments passed to other methods. |
Details
When 'dimnames', 'merge', 'cbind', 'rbind', or 't' is called on an 'NMdata' object, the 'NMdata' class is dropped, and then the operation is performed. So if an 'NMdata' object inherits from 'data.frame' and no other classes (which is default), these operations will be performed using the 'data.frame' methods. But for example, if you use 'as.fun' to get a 'data.table' or 'tbl', their respective methods are used instead.
Value
An object that is not of class 'NMdata'.
Transform repeated dosing events (ADDL/II) to individual dosing events
Description
Replaces single row repeated dosing events by multiple lines, then reorders rows with respect to ID and TIME. If a different row order is needed, you have to reorder the output manually.
Usage
NMexpandDoses( data, col.time = "TIME", col.id = "ID", col.evid = "EVID", track.expand = FALSE, subset.dos, quiet = FALSE, as.fun)
Arguments
data | The data set to expand |
col.time | The name of the column holding the time on which time since previous dose will be based. This is typically actual or nominal time since first dose. |
col.id | The subject identifier. All new columns will be derived within unique values of this column. |
col.evid | The name of the event ID column. This must exist in data. Default is EVID. |
track.expand | Keep track of what rows were in data originally and which ones are added by NMexpandDoses by including a column called nmexpand? nmexpand will be TRUE if the row is "generated" by NMexpandDoses. |
subset.dos | A string that will be evaluated as a custom expression to identify relevant events. |
quiet | Suppress messages back to user (default is FALSE) |
as.fun | The default is to return data as a data.frame. Pass a function (say tibble::as_tibble) in as.fun to convert to something else. If data.tables are wanted, use as.fun="data.table". The default can be configured using NMdataConf. |
Value
A data set with at least as many rows as data. If doses are found to expand, these will be added.
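A minimal sketch, assuming 'dat' is a hypothetical data set containing ADDL and II dosing records:
## expand ADDL/II dosing records into individual dose rows
dat.exp <- NMexpandDoses(dat)
## keep track of which rows were generated by the expansion
dat.exp <- NMexpandDoses(dat, track.expand=TRUE)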
Extract the data file used in a control stream
Description
A function that identifies the input data file based on a control stream. The default is to look at the $DATA section of the output control stream (or input control stream if the file.mod argument is used). This can be partly or fully overruled by using the dir.data or file.data arguments.
Usage
NMextractDataFile(file, dir.data = NULL, file.mod, file.data = NULL)
Arguments
file | The input control stream or the list file. |
dir.data | See NMscanInput. If used, only the file name mentioned in $DATA is used. dir.data will be used as the path, and the existence of the file in that directory is not checked. |
file.mod | The input control stream. Default is to look for "file" with extension changed to '.mod' (PSN style). You can also supply the path to the file, or you can provide a function that translates the output file path to the input file path. The default behavior can be configured using NMdataConf. See dir.data too. |
file.data | Specification of the data file path. When this is used, the control streams are not used at all. |
Value
The path to the input data file.
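A brief sketch, assuming a hypothetical output control stream "run001.lst":
## identify the input data file referenced in $DATA
NMextractDataFile("run001.lst")
## overrule the directory part of the path found in $DATA
NMextractDataFile("run001.lst", dir.data="derived")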
Versatile text extractor from Nonmem (input or output) control streams
Description
If you want to extract input sections like $PROBLEM, $DATA etc., see NMreadSection. This function is more general and can be used to extract e.g. result sections.
Usage
NMextractText( file, lines, text, section, char.section, char.end = char.section, return = "text", keep.empty = FALSE, keep.name = TRUE, keep.comments = TRUE, as.one = TRUE, clean.spaces = FALSE, simplify = TRUE, match.exactly = TRUE, type = "mod", linesep = "\n", keepEmpty, keepName, keepComments, asOne)
Arguments
file | A file path to read from. Normally a .mod or .lst. See lines and text as well. |
lines | Text lines to process. This is an alternative to using the file and text arguments. |
text | Use this argument if the text to process is one long character string, and indicate the line separator with the linesep argument. Use only one of file, lines, and text. |
section | The name of section to extract. Examples: "INPUT", "PK", "TABLE", etc. It can also be result sections like "MINIMIZATION". |
char.section | The section denoted as a string compatible with regular expressions. "$" (remember to escape properly) for sections in .mod files, "0" for results in .lst files. |
char.end | A regular expression to capture the end of the section. The default is to look for the next occurrence of char.section. |
return | If "text", plain text lines are returned. If "idx", matching line numbers are returned. "text" is default. |
keep.empty | Keep empty lines in output? Default is FALSE. Notice, comments are removed before empty lines are handled if 'keep.comments=TRUE'. |
keep.name | Keep the section name in output (say, "$PROBLEM")? Default is TRUE. It can only be FALSE if return="text". |
keep.comments | Default is to keep comments. If FALSE, they will be removed. |
as.one | If multiple hits, concatenate into one. This will most often be relevant with name="TABLE". If FALSE, a list will be returned, each element representing a table. Default is TRUE. So if you want to process the tables separately, you probably want FALSE here. |
clean.spaces | If TRUE, leading and trailing spaces are removed, and multiple succeeding white spaces are reduced to single white spaces. |
simplify | If asOne=FALSE, do you want the result to be simplified if only one table is found? Default is TRUE which is desirable for interactive analysis. For programming, you probably want FALSE. |
match.exactly | Default is to search for exact matches of 'section'. If FALSE, only the first three characters are matched. E.g., this allows "ESTIMATION" to match "ESTIMATION" or "EST". |
type | Either mod, res or NULL. mod is for information that is given in .mod (.lst file can be used but the results section is disregarded). If NULL, NA or empty string, everything is considered. |
linesep | If using the text argument, use linesep to indicate how lines should be separated. |
keepEmpty | Deprecated. See keep.empty. |
keepName | Deprecated. See keep.name. |
keepComments | Deprecated. See keep.comments. |
asOne | Deprecated. See as.one. |
Details
This function is planned to get a more general name and then be called by NMreadSection.
Value
character vector with extracted lines.
See Also
Other Nonmem: NMapplyFilters(), NMgenText(), NMreadSection(), NMreplaceDataFile(), NMwriteSection()
Examples
NMreadSection(system.file("examples/nonmem/xgxr001.lst", package = "NMdata"), section="DATA")
Generate text for INPUT and possibly DATA sections of NONMEM control streams.
Description
The user is provided with text to use in Nonmem. NMwriteSection can use the results to update the control streams. INPUT lists names of the data columns while DATA provides a path to data and ACCEPT/IGNORE statements. Once a column is reached that Nonmem will not be able to read as a numeric and the column is not in nm.drop, the list is stopped. The only exception is TIME, which is not tested for whether it is character or not.
Usage
NMgenText( data, drop, col.flagn, rename, copy, file, dir.data, capitalize = FALSE, until, allow.char.TIME = TRUE, width, quiet)
Arguments
data | The data that NONMEM will read. Either as a 'data.frame', or if a path to an rds or a delimited text file, the data will automatically be read first. |
drop | Only used for generation of proposed text for INPUT section. Columns to drop in Nonmem $INPUT. This has two implications. One is that the proposed $INPUT indicates =DROP after the given column names. The other is that in case it is a non-numeric column, succeeding columns will still be included in $INPUT and can be read by NONMEM. |
col.flagn | Name of a numeric column with zero value for rows to include in Nonmem run, non-zero for rows to skip. The argument is only used for generating the proposed $DATA text to paste into the Nonmem control stream. Default is defined by 'NMdataConf()'. To skip this feature, use 'col.flagn=FALSE'. |
rename | For the $INPUT text proposal only. If you want to rename columns in NONMEM $DATA, NMwriteData can adjust the suggested $DATA text. If you plan to use BBW instead of BWBASE in Nonmem, consider rename=c(BBW="BWBASE"). The result will include BBW and not BWBASE. |
copy | For the $INPUT text proposal only. If you plan to use additional names for columns in Nonmem $INPUT, NMwriteData can adjust the suggested $INPUT text. Say you plan to use CONC as DV in Nonmem, use copy=c(DV="CONC"), i.e. copy=c(newname="existing"). The INPUT suggestion will in this case contain DV=CONC. |
file | The file name NONMEM will read the data from (for the $DATA section). It can be a full path. |
dir.data | For the $DATA text proposal only. The path to the input datafile to be used in the Nonmem $DATA section. Often, a relative path to the actual Nonmem run is wanted here. If this is used, only the file name and not the path from the file argument is used. |
capitalize | For the $INPUT text proposal only. If TRUE, all column names in $INPUT text will be converted to capital letters. |
until | Use this to truncate the columns in $INPUT. until can either be a character (column name) or a numeric (column number). If a character is given, it is matched against the resulting column name representation in $INPUT, i.e. this could be "DV=CONC" if you are using the copy argument in this case. In case until is of length>1, the maximum will be used (probably only interesting if character values are supplied). |
allow.char.TIME | For the $INPUT text proposal only. Assume Nonmem can read TIME and DATE even if it can't be translated to numeric. This is necessary if using the 00:00 format. Default is TRUE. |
width | If positive, will be passed to strwrap for the $INPUT text. If missing or NULL, strwrap will be called with default value. If negative or zero, strwrap will not be called. |
quiet | Hold messages back? Default is defined by NMdataConf. |
Value
Text for inclusion in Nonmem control stream, invisibly. A list with elements 'DATA' and 'INPUT'.
See Also
Other Nonmem: NMapplyFilters(), NMextractText(), NMreadSection(), NMreplaceDataFile(), NMwriteSection()
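A minimal sketch, assuming 'dat' is a hypothetical analysis data set and the csv file will be written to "derived/pkdata.csv":
text.nm <- NMgenText(dat, file="derived/pkdata.csv")
text.nm$INPUT
text.nm$DATA
## use CONC as DV in Nonmem and drop a (hypothetical) character column from $INPUT
NMgenText(dat, file="derived/pkdata.csv", copy=c(DV="CONC"), drop="USUBJID")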
Get metadata from an NMdata object
Description
Extract metadata such as info on tables, columns and further details in your favorite class
Usage
NMinfo(data, info, as.fun)
Arguments
data | An object of class NMdata (a result of 'NMscanData()') |
info | If not passed, all the metadata is returned. You can use "details", "tables", or "columns" to get only these subsets. If info is "tables" or "columns" |
as.fun | The default is to return data as a 'data.frame'. Pass a function (say 'tibble::as_tibble') in as.fun to convert to something else. If 'data.table's are wanted, use 'as.fun="data.table"'. The default can be configured using 'NMdataConf()'. |
Value
A table of class as defined by as.fun in case info is"columns" or "tables". A list if info missing or equal to"details".
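A short sketch, assuming 'res' is the hypothetical result of an NMscanData() call on "run001.lst":
res <- NMscanData("run001.lst")
NMinfo(res, info="details")  ## general info about the model run
NMinfo(res, info="tables")   ## metadata on the output tables
NMinfo(res, info="columns")  ## metadata on the columns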
Test if a variable can be interpreted by Nonmem
Description
Nonmem can only interpret numeric data. However, a factor or a character variable may very well be interpretable by Nonmem (e.g. "33"). This function tells whether Nonmem will be able to read it.
Usage
NMisNumeric(x, na.strings = ".", each = FALSE)
Arguments
x | The vector to check. |
na.strings | Tolerated strings that do not translate to numerics. Default is to accept "." because it's common to write missing values that way to Nonmem (even if Nonmem will handle them as zeros rather than missing). Notice actual NA's are accepted so you may want to use na.strings=NULL if you don't code missings as "." and just do this when writing the data set to a delimited file (like NMwriteData will do for you). |
each | Use each=TRUE to evaluate each element in a vector individually. The default (each=FALSE) is to return a single-length logical for a vector x summarizing whether all the elements are numeric-compatible. |
Value
TRUE or FALSE
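Small examples based on the behavior described above:
NMisNumeric(c("2", "1.3", "."))         ## TRUE - all elements Nonmem-compatible
NMisNumeric(c("2", "id2"))              ## FALSE - "id2" cannot be read as numeric
NMisNumeric(c("2", "id2"), each=TRUE)   ## element-wise: TRUE FALSE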
Standardize column order in Nonmem input data
Description
Order data columns for easy export to Nonmem. No data values are edited. The order is configurable through multiple arguments. See details.
Usage
NMorderColumns( data, first, last, lower.last = FALSE, chars.last = TRUE, alpha = TRUE, col.id, col.nomtime, col.time, col.row, col.flagn, col.dv = "DV", allow.char.TIME = TRUE, as.fun = NULL, quiet)
Arguments
data | The dataset which columns to reorder. |
first | Columns that should come almost first. See details. |
last | Columns to move to back of dataset. If you work with a large dataset, and some columns are irrelevant for the Nonmem runs, you can use this argument. |
lower.last | Should columns which names contain lowercase characters be moved towards the back? Some people use a standard where lowercase variables (say "race") are character representations ("Asian", "Caucasian", etc.) and the uppercase ones (1,2,...) are the numeric representation for Nonmem. |
chars.last | Should columns which cannot be converted to numeric be put towards the end? A column can be a character or a factor in R, but still be valid in Nonmem (often the case for ID which can only contain numeric digits but really is a character or factor). So rather than only looking at the column class, the columns are attempted converted to numeric. Notice, the conversion is only attempted to test whether Nonmem will be able to make sense of it; the values in the resulting dataset will be untouched. No values will be edited. If TRUE, logicals will always be put last. NA's must be NA or ".". |
alpha | Sort columns alphabetically. Notice, this is the last order priority applied. |
col.id | Name of the (numeric) unique subject ID. Can be controlled with 'NMdataConf()'. |
col.nomtime | The name of the column containing nominal time. If given, it will put the column quite far left, just after row counter and 'col.id'. Default value is NOMTIME and can be configured with 'NMdataConf()'. |
col.time | The name of the column containing actual time. If given, it will put the column quite far left, just after row counter, subject ID, and nominal time. Default value is 'TIME'. Can be controlled with 'NMdataConf()'. |
col.row | A row counter column. This will be the first column in the dataset. Technically, you can use it for whatever column you want first. Default value is 'ROW' and can be configured with 'NMdataConf()'. |
col.flagn | The name of the column containing numerical flag values for data row omission. Default value is FLAG and can be configured with 'NMdataConf()'. |
col.dv | a vector of column names to put early to represent dependent variable(s). Default is DV. |
allow.char.TIME | For the $INPUT text proposal only. Assume Nonmem can read TIME and DATE even if it can't be translated to numeric. This is necessary if using the 00:00 format. Default is TRUE. |
as.fun | The default is to return a data.table if data is a data.table and return a data.frame in all other cases. Pass a function in as.fun to convert to something else. The default can be configured using 'NMdataConf()'. However, if data is a data.table, settings via 'NMdataConf()' are ignored. |
quiet | If true, no warning will be given about missing standard Nonmem columns. |
Details
This function will change the order of columns but it will never edit values in any columns. The ordering is by the following steps, each step depending on the corresponding argument.
- col.row: Row id if argument row is non-NULL
- not editable: ID (if a column is called ID)
- col.nomtime: Nominal time.
- col.time: Actual time.
- first: user-specified first columns
- only col.dv editable: Standard Nonmem columns: EVID, CMT, AMT, RATE, col.dv, MDV
- last: user-specified last columns
- chars.last: numeric, or interpretable as numeric
- not editable: less often used Nonmem names: col.flagn, OCC, ROUTE, GRP, TRIAL, DRUG, STUDY
- lower.last: lower case in name
- alpha: Alphabetic/numeric sorting
Value
data with modified column order.
See Also
Other DataCreate: NMstamp(), NMwriteData(), addTAPD(), findCovs(), findVars(), flagsAssign(), flagsCount(), mergeCheck(), tmpcol()
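A minimal sketch, assuming 'dat' is a hypothetical data set under preparation with columns WEIGHTB, SCREEN and COMMENT:
## put WEIGHTB early and push bookkeeping columns to the back
dat <- NMorderColumns(dat, first="WEIGHTB", last=c("SCREEN", "COMMENT"))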
Read covariance matrix from '.cov' file
Description
Read covariance matrix from '.cov' file
Usage
NMreadCov(file, auto.ext, tableno = "max", simplify = TRUE)
Arguments
file | The ".cov" covariance Nonmem matrix file to read |
auto.ext | If 'TRUE' (default) the extension will automatically be modified using 'NMdataConf()$file.cov'. This means 'file' can be the path to an input or output control stream, and 'NMreadCov()' will still read the '.cov' file. |
tableno | The table number to read. The ".cov" file can contain multiple tables and will often do so if using SAEM/IMP methods. Default is "max" which means the last table is used. Alternative values are "min" and "all" or numeric values. If "all" or multiple numeric values are used, a list is returned. However, see 'simplify' too. |
simplify | If 'TRUE' (default) and only one table is returned (say using tableno="max") only that matrix is returned as a matrix object. If 'FALSE' or multiple tables are returned, the result is a list. |
Value
A matrix with covariance step from NONMEM or a list ofsuch matrices (see 'simplify')
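A brief sketch, assuming a hypothetical model "run001.lst" for which a covariance step was run:
## auto.ext replaces the extension, so the .lst path can be used directly
covmat <- NMreadCov("run001.lst")
## keep all tables in the .cov file (a list is returned)
covs <- NMreadCov("run001.lst", tableno="all")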
Read input data formatted for Nonmem
Description
This function is especially useful if the csv file was written using NMwriteData.
Usage
NMreadCsv(file, args.fread, as.fun = NULL, format, args.fst)
Arguments
file | The file to read. Must be pure text. |
args.fread | List of arguments passed to fread. Notice that except for "file", you need to supply all arguments to fread if you use this argument. Default values can be configured using NMdataConf. |
as.fun | The default is to return data as a data.frame. Pass a function (say tibble::as_tibble) in as.fun to convert to something else. If data.tables are wanted, use as.fun="data.table". The default can be configured using NMdataConf. |
format | Format of file to read. Can be of length>1 in which case the first format found will be used (i.e. format is a prioritized vector). If not one of "rds" or "fst", it is assumed to be a delimited text file. Default is to determine this from the file name extension. Notice, if a delimited format is used, the extension can very well be different from "csv" (say the file name is "input.tab"). This will work for any delimited format supported by fread. |
args.fst | Optional arguments to pass to |
Details
This is almost just a shortcut to fread so you don't have to remember how to read the data that was exported for Nonmem. The only added feature is that meta data as written by NMwriteData is read and attached as NMdata metadata before data is returned.
Value
A data set of class as defined by as.fun.
See Also
NMwriteData
Other DataRead: NMreadTab(), NMscanData(), NMscanInput(), NMscanTables()
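A minimal sketch, assuming a hypothetical data file "derived/pkdata.csv" written with NMwriteData:
dat <- NMreadCsv("derived/pkdata.csv")
## return a data.table instead of a data.frame
dat <- NMreadCsv("derived/pkdata.csv", as.fun="data.table")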
Read information from Nonmem ext files
Description
Read information from Nonmem ext files
Usage
NMreadExt( file, return, as.fun, modelname, col.model, auto.ext, tableno = "max", file.ext, slow)
Arguments
file | Path to the ext file |
return | The .ext file contains both final parameter estimates and iterations of the estimates. If |
as.fun | The default is to return data as a |
modelname | See '?NMscanData' |
col.model | See '?NMscanData' |
auto.ext | If 'TRUE' (default) the extension will automatically be modified using |
tableno | In case the ext file contains multiple tables, this argument controls which one to choose. The options are |
file.ext | Deprecated. Please use |
slow | Use a slow but more robust method to read tables? If missing or 'NULL', the fast method will be tried first, and if any issues are seen, the method will switch to 'slow=TRUE'. If 'FALSE', it will also switch in case of issues, but a warning is issued. In other words, it should be safe to not use this argument. |
Details
The parameter table returned if return="pars" or return="all" will contain columns based on the Nonmem 7.5 manual. It defines codes for different parameter-level values. They are:
-1e+09: se
-1000000002: eigCor
-1000000003: cond
-1000000004: stdDevCor
-1000000005: seStdDevCor
-1000000006: FIX
-1000000007: termStat
-1000000008: partLik
The parameter name is in the parameter column. The "parameter type", like "THETA", "OMEGA", "SIGMA", is available in the par.type column. Counters are available in i and j columns. j will be NA for par.type=="THETA"
The objective function value is included as a parameter.
Notice that in case multiple tables are available in the 'ext' file, the column names are taken from the first table. E.g., in case of SAEM/IMP estimation, the objective function values will be in the SAEMOBJ column, even for the IMP step. This may change in the future.
Value
Ifreturn="all", a list with a final parametertable and a table of the iterations. Ifreturn="pars",only the parameter table, and ifreturn="iterations"only the iterations table. If you need both, it may be moreefficient to only read the file once and usereturn="all". Often, only one of the two are needed,and it more convenient to just extract one.
Read data filters from a NONMEM model
Description
Read data filters from a NONMEM model
Usage
NMreadFilters(file, lines, filters.only = TRUE, as.fun)
Arguments
file | Control stream path |
lines | Control stream lines if already read from file |
filters.only | Return the filters only or also return the remaining text in a separate object? If 'FALSE', a list with the two objects is returned. |
as.fun | Function to run on the tables with filters. |
Value
A 'data.frame' with filters
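A brief sketch, assuming a hypothetical control stream "run001.mod" containing ACCEPT/IGNORE statements:
filters <- NMreadFilters("run001.mod")
filters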
Tabulate information from parameter sections in control streams
Description
Tabulate information from parameter sections in control streams
Usage
NMreadInits(file, lines, section, return = "pars", as.fun)
Arguments
file | Path to a control stream. See 'lines' too. |
lines | A control stream as text lines. Use this or 'file'. |
section | The section to read. Typically, "theta", "omega", or "sigma". Default is those three. |
return | By default (when |
as.fun | See ?NMscanData |
Value
A 'data.frame' with parameter values. If 'return="all"', a list of three tables.
Read comments to parameter definitions in Nonmem control streams
Description
When interpreting parameter estimates, it is often needed to recover information about the meaning of the different parameters from the control stream. 'NMreadParsText' provides a flexible way to organize the comments in the parameter sections into a 'data.frame'. This can subsequently easily be merged with parameter values as obtained with 'NMreadExt'.
Usage
NMreadParsText( file, lines, format, format.omega = format, format.sigma = format.omega, spaces.split = FALSE, unique.matches = TRUE, field.idx = "idx", use.idx = FALSE, add.init = TRUE, modelname, col.model, as.fun, use.theta.idx, fields, fields.omega = fields, fields.sigma = fields.omega)
Arguments
file | Path to the control stream to read. |
lines | As an alternative to 'file', the control stream or selected lines of the control stream can be provided as a vector of lines. |
format | Defines naming and splitting of contents of lines in parameter sections. Default is |
format.omega | Like 'format', applied to '$OMEGA' section. Default is to reuse 'format'. |
format.sigma | Like 'format', applied to '$SIGMA' section. Default is to reuse 'format.omega'. |
spaces.split | Is a blank in 'fields' to be treated as a field separator? Default is not to (i.e. neglect spaces in 'fields'). |
unique.matches | If TRUE, each line in the control stream is assigned to one parameter, at most. This means, if two parameters are listed in one line, the comments will only be used for one of the parameters, and only that parameter will be kept in output. Where this will typically happen is in '$OMEGA' and '$SIGMA' sections where off-diagonals may be put on the same line as diagonal elements. Since the off-diagonal elements are covariances of variables that have already been identified by the diagonals, the off-diagonal elements can be automatically described. For example, if 'OMEGA(1,1)' is between-subject variability (BSV) on CL and 'OMEGA(2,2)' is BSV on V, then we know that 'OMEGA(2,1)' is covariance of (BSV on) CL and V. |
field.idx | If an index field is manually provided in the control stream comments, define the name of that field in 'format' and tell 'NMreadParsText()' to use this idx to organize especially OMEGA and SIGMA elements by pointing to it with 'field.idx'. The default is to look for a variable called 'idx'. If the index has values like 1-2 on an OMEGA or SIGMA row, the row is interpreted as the covariance between OMEGA/SIGMA 1 and 2. |
use.idx | The default method is to automatically identify element numbering ('i' for THETAs, 'i' and 'j' for OMEGAs and SIGMAs). The automated method is based on identification of 'BLOCK()' structures and numbers of initial values. Should this fail, or should you want to control this manually, you can include a parameter counter in the comments and have 'NMreadParsText()' use that to assign the numbering. 'use.idx=FALSE' is default and means all blocks are handled automatically, 'use.idx=TRUE' assumes you have a counter in all sections, and a character vector like 'use.idx="omega"' can be used to denote which sections use such a counter from the control stream. When using a counter on OMEGA and SIGMA, off-diagonal elements MUST be denoted by 'i-j', like '2-1' for OMEGA(2,1). See 'field.idx' too. |
add.init | If 'TRUE' (default), a field will automatically be added to the formats for the initial value string. This will be called "initstr". It will only happen if the first field is not called either "initstr" or "init". The only situation where one would use 'add.init=FALSE' is if a different name for the initial value field is already included in formats. |
modelname | See ?NMscanData |
col.model | See ?NMscanData |
as.fun | See ?NMscanData |
use.theta.idx | If an index field in comments should be used to number thetas. The index field is used to organize '$OMEGA's and '$SIGMA's because they are matrices but I do not see where this is advantageous to do for '$THETA's. Default 'use.theta.idx=FALSE' which means '$THETA's are simply counted. |
fields | Deprecated. Use 'format'. |
fields.omega | Deprecated. Use 'format.omega'. |
fields.sigma | Deprecated. Use 'format.sigma'. |
Details
Off-diagonal omega and sigma elements will only be correctly treated if their num field specifies say 1-2 to indicate it is the covariance between elements 1 and 2.
SAME elements in $OMEGA will be skipped altogether.
Value
data.frame with parameter names and fields read from comments
Examples
## setDTthreads() is only needed for CRAN. Users should not do this.
data.table::setDTthreads(1)
## end setDTthreads() for CRAN
## notice, examples on explicitly stated lines. Most often in
## practice, one would use the file argument to automatically
## extract the $THETA, $OMEGA and $SIGMA sections from a control
## stream.
text <- c("$THETA (.1) ;[1]; LTVKA (mL/h)
$OMEGA BLOCK(3)
0.126303 ; IIV.CL        ; 1   ;IIV ;Between-subject variability on CL;-
0.024    ; IIV.CL.V2.cov ; 1-2 ;IIV ;Covariance of BSV on CL and V2;-
0.127    ; IIV.V2        ; 2   ;IIV ;Between-subject variability on V2;-
0.2      ; IIV.CL.V3.cov ; 1-3 ;IIV ;Covariance of BSV on CL and V3;-
0.2      ; IIV.V2.V3.cov ; 2-3 ;IIV ;Covariance of BSV on V2 and V3;-
0.38     ; IIV.V3        ; 3   ;IIV ;Between-subject variability on V3;-
$OMEGA 0 FIX ; IIV.KA ; 4 ;IIV ;Between-subject variability on KA;-
$SIGMA 1")
lines <- strsplit(text,split="\n")[[1]]
res <- NMreadParsText(lines=lines,
                      format="%init;[%num];%symbol",
                      format.omega="%init; %symbol ; %num ; %type ; %label ; %unit",
                      field.idx="num")
## BLOCK() SAME are skipped
text <- c("$THETA
(0,0.1) ; THE1 - 1) 1st theta
(0,4.2) ; THE2 - 2) 2nd theta
$OMEGA 0.08 ; IIV.TH1 ; 1 ;IIV
$OMEGA BLOCK(1)
0.547465 ; IOV.TH1 ; 2 ;IOV
$OMEGA BLOCK(1) SAME
$OMEGA BLOCK(1) SAME")
lines <- strsplit(text,split="\n")[[1]]
res <- NMreadParsText(lines=lines,
                      format="%init;%symbol - %idx) %label",
                      format.omega="%init; %symbol ; %idx ; %label ")
Read information from Nonmem phi files
Description
Read information from Nonmem phi files
Usage
NMreadPhi(file, as.fun, modelname, col.model, auto.ext, file.phi)
Arguments
file | Path to the phi file. See 'auto.ext' too. |
as.fun | The default is to return data as a data.frame. Pass a function (say tibble::as_tibble) in as.fun to convert to something else. If data.tables are wanted, use as.fun="data.table". The default can be configured using NMdataConf. |
modelname | See ?NMscanData |
col.model | See ?NMscanData |
auto.ext | If 'auto.ext=TRUE', the file name extension will automatically be changed using the setting in 'NMdataConf()$file.phi' - this by default means that the '.phi' extension will be used no matter what extension the provided file name has. |
file.phi | Deprecated. Use 'file'. |
Value
A list with a final parameter table and a table of the iterations
Extract sections of Nonmem control streams
Description
This is a very commonly used wrapper for the input part of the model file. See NMextractText for more general functionality suitable for the results part too.
Usage
NMreadSection( file = NULL, lines = NULL, text = NULL, section, return = "text", keep.empty = FALSE, keep.name = TRUE, keep.comments = TRUE, as.one = TRUE, clean.spaces = FALSE, simplify = TRUE, keepEmpty, keepName, keepComments, asOne, ...)
Arguments
file | A file path to read from. Normally a .mod or .lst. See lines also. |
lines | Text lines to process. This is an alternative to using the file argument. |
text | Deprecated, use 'lines'. Use this argument if the text to process is one long character string, and indicate the line separator with the linesep argument (handled by NMextractText). Use only one of file, lines, and text. |
section | The name of section to extract without "$". Examples: "INPUT", "PK", "TABLE", etc. Not case sensitive. |
return | If "text", plain text lines are returned. If "idx", matching line numbers are returned. "text" is default. |
keep.empty | Keep empty lines in output? Default is FALSE. Notice, comments are removed before empty lines are handled if 'keep.comments=TRUE'. |
keep.name | Keep the section name in output (say, "$PROBLEM")? Default is FALSE. It can only be FALSE if return="text". |
keep.comments | Default is to keep comments. If FALSE, they will be removed. See keep.empty too. Notice, there is no way for NMreadSection to keep comments and also drop lines that only contain comments. |
as.one | If multiple hits, concatenate into one. This will most often be relevant with name="TABLE". If FALSE, a list will be returned, each element representing a table. Default is TRUE. So if you want to process the tables separately, you probably want FALSE here. |
clean.spaces | If TRUE, leading and trailing spaces are removed, and multiple succeeding white spaces are reduced to single white spaces. |
simplify | If asOne=FALSE, do you want the result to be simplified if only one section is found? Default is TRUE which is desirable for interactive analysis. For programming, you probably want FALSE. |
keepEmpty | Deprecated. See keep.empty. |
keepName | Deprecated. See keep.name. |
keepComments | Deprecated. See keep.comments. |
asOne | Deprecated. See as.one. |
... | Additional arguments passed to NMextractText |
Value
character vector with extracted lines.
See Also
Other Nonmem: NMapplyFilters(), NMextractText(), NMgenText(), NMreplaceDataFile(), NMwriteSection()
Examples
NMreadSection(system.file("examples/nonmem/xgxr001.lst", package="NMdata"), section="DATA")
Read Shrinkage data reported by Nonmem
Description
Read Shrinkage data reported by Nonmem
Usage
NMreadShk(file, auto.ext, as.fun)
Arguments
file | A model file. Extension will be replaced by ".shk". |
auto.ext | If 'TRUE' (default) the extension will automatically be modified using 'NMdataConf()$file.shk'. This means 'file' can be the path to an input or output control stream, and 'NMreadShk' will still read the '.shk' file. |
as.fun | See ?NMdataConf |
Details
Type 1=etabar
Type 2=Etabar SE
Type 3=P val
Type 4=
Type 5=
Type 6=
Type 7=number of subjects used.
Type 8=
Type 9=
Type 10=
Type 11=
Value
A 'data.frame' with shrinkage values, indexes, and name of related parameter, like 'OMEGA(1,1)'.
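A brief sketch, assuming a hypothetical model "run001.lst" for which Nonmem wrote a .shk file:
## auto.ext replaces the extension, so the .lst path can be used directly
shk <- NMreadShk("run001.lst")
head(shk)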
Read SIZES info from a control stream
Description
Read SIZES info from a control stream
Usage
NMreadSizes(file.mod = NULL, lines = NULL)
Arguments
file.mod | Control stream path. |
lines | Character vector with control stream file. |
Value
A list with SIZES parameter values
Read an output table file from Nonmem
Description
Read a table generated by a $TABLE statement in Nonmem. Generally, these files cannot be read by read.table or similar because formatting depends on options in the $TABLE statement, and because Nonmem sometimes includes extra lines in the output that have to be filtered out. NMreadTab can do this automatically based on the table file alone.
Usage
NMreadTab( file, col.tableno, col.nmrep, col.table.name, header = TRUE, skip, quiet = TRUE, as.fun, ...)
Arguments
file | path to Nonmem table file |
col.tableno | In case of simulations where tables are being repeated, a counter of the repetition number can be useful to include in the output. For now, this will only work if the NOHEADER option is not used. This is because NMreadTab searches for the "TABLE NO..." strings in Nonmem output tables. If col.tableno is TRUE (default), a counter of tables is included as a column called NMREP. Notice, the table numbers in NMREP are cumulatively counting the number of tables reported in the file. NMREP is not the actual table number as given by Nonmem. |
col.nmrep | If tables are repeated, include a counter? It does not relate to the order of the $TABLE statements but to cases where a $TABLE statement is run repeatedly. E.g., in combination with the SUBPROBLEMS feature in Nonmem, it is useful to keep track of the table (repetition) number. If col.nmrep is TRUE, this will be carried forward and added as a column called NMREP. This is default behavior when more than one $TABLE repetition is found in data. Set it to a different string to request the column with a different name. The argument is passed to NMscanTables. |
col.table.name | The name of a column containing the name or description of the table (generated by Nonmem). The default is "table.name". Use FALSE not to include this column. |
header | Use header=FALSE if table was created with NOHEADER option in $TABLE. |
skip | The number of rows to skip. The default is skip=1 if header==TRUE and skip=0 if header==FALSE. |
quiet | logical stating whether or not information is printed about what is being done. Default can be configured using NMdataConf. |
as.fun | The default is to return data as a data.frame. Pass a function (say tibble::as_tibble) in as.fun to convert to something else. If data.tables are wanted, use as.fun="data.table". The default can be configured using NMdataConf. |
... | Arguments passed to |
Details
The actual reading of data is based on data.table::fread. Generally, the function is fast thanks to data.table.
Value
The Nonmem table data.
See Also
Other DataRead: NMreadCsv(), NMscanData(), NMscanInput(), NMscanTables()
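A minimal sketch, assuming a hypothetical output table file "run001_res.txt" generated by a $TABLE statement:
tab <- NMreadTab("run001_res.txt")
## a table written with the NOHEADER option in $TABLE
tab <- NMreadTab("run001_res.txt", header=FALSE)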
Read Nonmem table files without assumptions about what tables they contain
Description
Read Nonmem table files without assumptions about what tables they contain
Usage
NMreadTabSlow(file, col.table.name = TRUE)
Arguments
file | A Nonmem table file. Can be output tables, or one of the several different results files from Nonmem. |
col.table.name | Name of the column (to be created) containing the "table name" which is derived from the Nonmem description of the table sometimes pasted above the table data. |
Details
'NMreadTabSlow' reads parameter tables from Nonmem very slowly, and most often 'NMreadTab' is a better function to use. However, 'NMreadTabSlow' also works for table files that contain incompatible tables.
Relate parameter names and variables based on control stream code sections.
Description
Relate parameter names and variables based on control stream code sections.
Usage
NMrelate(file, lines, modelname, par.type, col.model, sections, as.fun)
Arguments
file | Path to a control stream to process. See 'lines' too. |
lines | If the control stream has been read already, the text can be provided here instead of using the 'file' argument. Character vector of text lines. |
modelname | Either a model name (like "Base") or a function that derives the model name from the control stream file path. The default is dropping the file name extension on the control stream file name. |
par.type | Parameter type(s) to include. Default is all three possible which is |
col.model | Name of the column containing the model name. |
sections | Sections of the control stream to consider. Default is all of |
as.fun | The default is to return data as a data.frame. Pass a function (say tibble::as_tibble) in as.fun to convert to something else. If data.tables are wanted, use as.fun="data.table". The default can be configured using NMdataConf. |
Details
'NMrelate()' processes $PRED, $PK and $ERROR sections. It does not read ext files or $THETA, $OMEGA, $SIGMA sections to gain information but only extracts what it can from the model code. You can then merge with information from functions such as 'NMreadExt()' and 'NMreadParText()'.
Value
data.frame relating parameters to variable names
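An illustrative sketch, assuming a hypothetical control stream "run001.mod":
## relate THETA/OMEGA names to the variables they are used with in the code
rel <- NMrelate(file="run001.mod", par.type=c("THETA","OMEGA"))
## rel can subsequently be merged with estimates from NMreadExt("run001.mod")
## (check the shared column names before merging)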
Replace data file used in Nonmem control stream
Description
Replace data file used in Nonmem control stream
Usage
NMreplaceDataFile(files, file.pattern, dir, path.data, newfile = file.mod, ...)Arguments
files | Paths to input control streams to modify. See file.pattern and dir too. |
file.pattern | A pattern to look for if 'dir' is supplied too (and not 'file.mod'). This is used to modify multiple input control streams at once. |
dir | Directory in which to look for 'file.pattern'. Notice, use either just 'file.mod' or both 'dir' and 'file.pattern'. |
path.data | Path to the input data file to be used in the new control stream. |
newfile | A path to a new control stream to write to (and don't edit contents of 'file.mod'). Default is to overwrite 'file.mod'. |
... | Additional arguments to pass to NMwriteSection. |
Value
Lines for a new control stream (invisibly)
See Also
Other Nonmem:NMapplyFilters(),NMextractText(),NMgenText(),NMreadSection(),NMwriteSection()
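An illustrative sketch; the control stream and data paths are made up:
## point run101.mod to a new data file and save the result as run102.mod
NMreplaceDataFile(files="run101.mod",
                  path.data="../derived/pkdata2.csv",
                  newfile="run102.mod")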
Automatically find Nonmem input and output tables and organize data
Description
This is a very general solution to automatically identifying, reading, and merging all output and input data in a Nonmem model. The most important steps are
Read and combine output tables,
If wanted, read input data and restore variables that were not output from the Nonmem model
If wanted, also restore rows from input data that were disregarded in Nonmem (e.g. observations or subjects that are not part of the analysis)
Usage
NMscanData( file, col.row, use.input, merge.by.row, recover.rows, file.mod, dir.data, file.data, translate.input = TRUE, quiet, formats.read, args.fread, as.fun, col.id = "ID", modelname, col.model, col.nmout, col.nmrep, order.columns = TRUE, check.time, tz.lst, skip.absent = FALSE, tab.count, use.rds)Arguments
file | Path to a Nonmem control stream or output file fromNonmem (.mod or .lst) |
col.row | A column with a unique value for each row. Such acolumn is recommended to use if possible. See merge.by.row anddetails as well. Default ("ROW") can be modified usingNMdataConf. |
use.input | Should the input data be added to the outputdata. Only column names that are not found in output data willbe retrieved from the input data. Default is TRUE which can bemodified using NMdataConf. See merge.by.row too. |
merge.by.row | If use.input=TRUE, this argument determinesthe method by which the input data is added to outputdata. The default method (merge.by.row=FALSE) is to interpretthe Nonmem code to imitate the data filtering (IGNORE andACCEPT statements), but the recommended method ismerge.by.row=TRUE which means that data will be merged by aunique row identifier. The row identifier must be present ininput and at least one full length output data table. Seeargument col.row too. |
recover.rows | Include rows from input data files that do notexist in output tables? This will be added to the $row datasetonly, and $run, $id, and $occ datasets are created before thisis taken into account. A column called nmout will be TRUE whenthe row was found in output tables, and FALSE whennot. Default is FALSE and can be configured using NMdataConf. |
file.mod | The input control stream file path. Default is to look for "file" with extension changed to .mod (PSN style). You can also supply the path to the file, or you can provide a function that translates the output file path to the input file path. The default behavior can be configured using NMdataConf. See dir.data too. |
dir.data | The data directory can only be read from thecontrol stream (.mod) and not from the output file (.lst). Soif you only have the output control stream, use dir.data totell in which directory to find the data file. If dir.data isprovided, the .mod file is not used at all. |
file.data | Specification of the data file path. When this isused, the control streams are not used at all. |
translate.input | Default is TRUE, meaning that input datacolumn names are translated according to $INPUT section inNonmem listing file. |
quiet | The default is to give some information along the wayon what data is found. But consider setting this to TRUE fornon-interactive use. Default can be configured usingNMdataConf. |
formats.read | Prioritized input data file formats to lookfor and use if found. Default is c("rds","csv") which means |
args.fread | List of arguments passed to fread when reading _input_ data. Notice that except for "input" and "file", you need to supply all arguments to fread if you use this argument. Default values can be configured using NMdataConf. |
as.fun | The default is to return data as a data.frame. Passa function (say tibble::as_tibble) in as.fun to convert tosomething else. If data.tables are wanted, useas.fun="data.table". The default can be configured usingNMdataConf. |
col.id | The name of the subject ID variable, default is"ID". |
modelname | The model name to be stored if col.model is notNULL. If not supplied, the name will be taken from the controlstream file name by omitting the directory/path and deletingthe .lst extension (path/run001.lst becomes run001). This canbe a character string or a function which is called on thevalue of file (file is another argument to NMscanData). Thefunction must take one character argument and return anothercharacter string. As example, see NMdataConf()$modelname. Thedefault can be configured using NMdataConf. |
col.model | A column of this name containing the model namewill be included in the returned data. The default is to storethis in a column called "model". See argument "modelname" aswell. Set to NULL if not wanted. Default can be configuredusing NMdataConf. |
col.nmout | A column of this name will be a logicalrepresenting whether row was in output table or not. Defaultcan be modified using NMdataConf. |
col.nmrep | If tables are repeated, include a counter? Itdoes not relate to the order of the $TABLE statements but tocases where a $TABLE statement is run repeatedly. E.g., incombination with the SUBPROBLEMS feature in Nonmem, it isuseful to keep track of the table (repetition) number. Ifcol.nmrep is TRUE, this will be carried forward and added as acolumn called NMREP. This is default behavior when more thanone $TABLE repetition is found in data. Set it to a differentstring to request the column with a different name. Theargument is passed to NMscanTables. |
order.columns | If TRUE (default), NMorderColumns is used toreorder the columns before returning the data. NMorderColumnswill be called with alpha=FALSE, so columns are not sortedalphabetically. But standard Nonmem columns like ID, TIME, andother will be first. If col.row is used, this will be passedto NMorderColumns too. |
check.time | If TRUE (default) and if input data is used,input control stream and input data are checked to be newerthan output control stream and output tables. These areimportant assumptions for the way information is merged byNMscanData. However, if data has been transferred from anothersystem where Nonmem was run, these checks may not make sense,and you may not want to see these warnings. The default can beconfigured using NMdataConf. For the output control stream,the time stamp recorded by Nonmem is used if possible, and ifthe input data is created with NMwriteData, the recordedcreation time is used if possible. If not, and for all otherfiles, the file modification times are used. |
tz.lst | If supplied, the timezone to be used when readingthe time stamp in the output control stream. Please supplysomething listed in OlsonNames(). Can be configured usingNMdataConf() too. |
skip.absent | Skip missing output table files with a warning?Default is FALSE in which case an error is thrown. |
tab.count | Deprecated. Use |
use.rds | Deprecated - use |
Details
This function makes it very easy to collect the data froma Nonmem run.
A useful feature of this function is that it can automatically combine "input" data (the data read by Nonmem in $INPUT or $INFILE) with "output" data (tables written by Nonmem in $TABLE). There are two implemented methods for doing so. One (the default but not recommended) relies on interpretation of filter (IGNORE and ACCEPT) statements in $DATA. This will work in most cases, and checks for consistency with Nonmem results. However, the recommended method is using a unique row identifier in both input data and at least one output data file (not a FIRSTONLY or LASTONLY table). Supply the name of this column using the col.row argument.
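A sketch of the recommended approach, assuming a hypothetical model "run001.lst" and a unique row identifier called ROW present in the input data and in at least one full-length output table:
res <- NMscanData("run001.lst", merge.by.row=TRUE, col.row="ROW")
## additionally recover input-data rows that Nonmem disregarded
res.all <- NMscanData("run001.lst", merge.by.row=TRUE, col.row="ROW",
                      recover.rows=TRUE)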
Limitations. A number of Nonmem features are not supported. Most of this can be overcome by using merge.by.row=TRUE. Incomplete list of known limitations:
- character TIME
If Nonmem is used to translate DAY and a character TIME column, TIME has to be available in an output table. NMscanData does not do the translation to numeric.
- RECORDS
The RECORDS option to limit the part of the input data being used is not searched for. Using merge.by.row=TRUE will work unaffectedly.
- NULL
The NULL argument to specify missing value string in input data is not respected. If delimited input data is read (as opposed to rds files), missing values are assumed to be represented by dots (.).
Value
A data set of class 'NMdata'.
See Also
Other DataRead:NMreadCsv(),NMreadTab(),NMscanInput(),NMscanTables()
Examples
## Not run: 
res1 <- NMscanData(system.file("examples/nonmem/xgxr001.lst", package="NMdata"))
## End(Not run)
Find and read input data and optionally translate column names according to the $INPUT section
Description
This function finds and reads the input data based on a control stream file path. It can align the column names to the definitions in $INPUT in the control stream, and it can subset the data based on ACCEPT/IGNORE statements in $DATA. It supports a few other ways to identify the input data file than reading the control stream, and it can also read an rds or fst file instead of the delimited text file used by Nonmem.
Usage
NMscanInput( file, formats.read, file.mod, dir.data = NULL, file.data = NULL, apply.filters = FALSE, translate = TRUE, recover.cols = TRUE, details = TRUE, col.id = "ID", col.row, quiet, args.fread, invert = FALSE, modelname, col.model, as.fun, applyFilters, use.rds)Arguments
file | a .lst (output) or a .mod (input) control streamfile. The filename does not need to end in .lst. It isrecommended to use the output control stream because itreflects the model as it was run rather than how it is plannedfor next run. However, see file.mod and dir.data. |
formats.read | Prioritized input data file formats to lookfor and use if found. Default is c("rds","csv") which means |
file.mod | The input control stream file path. Default is to look for "file" with extension changed to .mod (PSN style). You can also supply the path to the file, or you can provide a function that translates the output file path to the input file path. If dir.data is missing, the input control stream is needed. This is because the .lst does not contain the path to the data file. The .mod file is only used for finding the data file. How to interpret the data file is read from the .lst file. The default can be configured using NMdataConf. See dir.data too. |
dir.data | The data directory can only be read from thecontrol stream (.mod) and not from the output file (.lst). Soif you only have the output file, use dir.data to tell inwhich directory to find the data file. If dir.data isprovided, the .mod file is not used at all. |
file.data | Specification of the data file path. When this isused, the control streams are not used at all. |
apply.filters | If TRUE (default), IGNORE and ACCEPTstatements in the Nonmem control streams are applied beforereturning the data. This affects what rows are returned, notcolumns. |
translate | If TRUE (default), data columns are named asinterpreted by Nonmem (in '$INPUT'). See details. |
recover.cols | recover columns that were not used in the Nonmem control stream? This means adding columns from the input data file that are not used in '$INPUT'. If the data file contains more columns than mentioned in '$INPUT', these will be named as in the data file (if the data file contains named variables). This affects what columns are returned, not rows. |
details | If TRUE, metadata is added to output. In this case, you get a list. Typically, this is mostly useful when programming functions whose behavior must depend on properties of the output. See details. |
col.id | The name of the subject ID column. Optional and onlyused to calculate number of subjects in data. Default ismodified by NMdataConf. |
col.row | The name of the row counter column. Optional andonly used to check whether the row counter is in the data. |
quiet | Default is to inform a little, but TRUE is useful fornon-interactive stuff. |
args.fread | List of arguments passed to fread. Notice thatexcept for "input" and "file", you need to supply allarguments to fread if you use this argument. Default valuescan be configured using 'NMdataConf()'. |
invert | If TRUE, the data rows that are dismissed by theNonmem data filters (ACCEPT and IGNORE) and only this will bereturned. Only used if 'apply.filters' is 'TRUE'. |
modelname | Only affects meta data table. The model name tobe stored if col.model is not NULL. If not supplied, the namewill be taken from the control stream file name by omittingthe directory/path and deleting the .lst extension(path/run001.lst becomes run001). This can be a characterstring or a function which is called on the value of file(file is another argument to NMscanData). The function musttake one character argument and return another characterstring. As example, see NMdataConf()$modelname. The defaultcan be configured using NMdataConf. |
col.model | Only affects meta data table. A column of thisname containing the model name will be included in thereturned data. The default is to store this in a column called"model". See argument "modelname" as well. Set to NULL if notwanted. Default can be configured using NMdataConf. |
as.fun | The default is to return data as a data.frame. Passa function (say tibble::as_tibble) in as.fun to convert tosomething else. If data.tables are wanted, useas.fun="data.table". The default can be configured usingNMdataConf. |
applyFilters | Deprecated - use apply.filters. |
use.rds | Deprecated - use |
Details
Columns that are dropped (using 'DROP' or 'SKIP' in '$INPUT') in the model will be included in the output.
Renamed columns: If the first column is called SUBJID in the data set but the control stream says '$INPUT ID...', the first column in the resulting data set will be called 'ID', not 'SUBJID'.
Copied columns: If the first column is called SUBJID in the data set but the control stream says '$INPUT SUBJID=ID...', the first column in the resulting data set will be called 'ID'. 'SUBJID' will be included to the right of the other variables defined in '$INPUT'. Normally, Nonmem should only allow copying if one of the created variable names is one of the reserved data labels, such as 'ID', 'DV', 'AMT', etc. 'NMtransInp()' will prioritize the reserved labels and use those first, and put the non-recognized name to the right.
Dropped columns: Dropped columns are recreated. If the first column is called SUBJID in the data set but the control stream says '$INPUT ID=DROP ID=USUBJID2 ...', the first column in the resulting data set is called 'ID_DROP' and the second is called 'ID'. 'USUBJID2' will also be included as already described.
Unique column names: With options to DROP variables, rename variables, copy variables, and recover variables from the data set on file, there are many ways duplicate variable names can be introduced. NMtransInp is supposed to avoid duplicate column names. To do so, it prioritizes which variables keep their original name based on a few criteria.
A variable defined in '$INPUT' is prioritized. If one is dropped, say, and then introduced with a new variable, say '$INPUT DV=DROP OBS=DV', you will get column names 'DV_DROP', 'DV' and 'OBS' ('OBS' will come further to the right if more variables are defined in '$INPUT'). Also, say the data file now also contains a variable called DV further to the right that was never read by '$INPUT'. That variable will be included called 'DV_FILE' because 'DV' is already taken. If needed, variables will be numbered, say 'DV_FILE', 'DV_FILE2', etc.
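An illustrative sketch (the control stream path is made up):
## input data as Nonmem sees it: $INPUT names applied, ACCEPT/IGNORE filters applied
inp <- NMscanInput("run001.lst", translate=TRUE, apply.filters=TRUE)
## the rows Nonmem disregards instead
dropped <- NMscanInput("run001.lst", apply.filters=TRUE, invert=TRUE)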
Value
A data set, class defined by 'as.fun'
See Also
Other DataRead:NMreadCsv(),NMreadTab(),NMscanData(),NMscanTables()
Run NMscanData on multiple models and stack results
Description
Useful function for meta analyses when multiple models are stored in one folder and can be read with NMscanData using the same arguments.
Usage
NMscanMultiple(files, dir, file.pattern, as.fun, ...)Arguments
files | File paths to the models (control stream) toedit. See file.pattern too. |
dir | The directory in which to find the models. Passed tolist.files(). See file.pattern argument too. |
file.pattern | The pattern used to match the filenames toread with NMscanData. Passed to list.files(). If |
as.fun | The default is to return data as a data.frame. Passa function (say tibble::as_tibble) in as.fun to convert tosomething else. If data.tables are wanted, useas.fun="data.table". The default can be configured usingNMdataConf. |
... | Additional arguments passed to NMscanData. |
Value
All results stacked, class as defined by as.fun
Examples
## Not run: 
res <- NMscanMultiple(dir=system.file("examples/nonmem", package="NMdata"),
   file.pattern="xgxr01.*\\.lst", as.fun="data.table")
res.mean <- res[,.(meanPRED=exp(mean(log(PRED)))),by=.(model,NOMTIME)]
library(ggplot2)
ggplot(res.mean,aes(NOMTIME,meanPRED,colour=model))+geom_line()
## End(Not run)
Find and read all output data tables in Nonmem run
Description
Find and read all output data tables in Nonmem run
Usage
NMscanTables( file, as.fun, quiet, col.nmrep = TRUE, col.tableno = FALSE, col.id = "ID", col.row, details, skip.absent = FALSE, meta.only = FALSE, modelname, col.model)Arguments
file | the Nonmem file to read (normally .mod or .lst) |
as.fun | The default is to return data as a data.frame. Passa function (say tibble::as_tibble) in as.fun to convert tosomething else. If data.tables are wanted, useas.fun="data.table". The default can be configured usingNMdataConf. |
quiet | The default is to give some information along the wayon what data is found. But consider setting this to TRUE fornon-interactive use. Default can be configured usingNMdataConf. |
col.nmrep | If tables are repeated, include a counter? It does not relate to the order of the $TABLE statements but to cases where a $TABLE statement is run repeatedly. E.g., in combination with the SUBPROBLEMS feature in Nonmem, it is useful to keep track of the table (repetition) number. If col.nmrep is TRUE, this will be carried forward and added as a column called NMREP. This is default behavior when more than one $TABLE repetition is found in data. Set it to a different string to request the column with a different name. The argument is passed to NMscanTables. |
col.tableno | Nonmem includes a counter of tables in the written data files. These are often not useful. However, if col.tableno is TRUE (not default), this will be carried forward and added as a column called NMREP. Even if NMREP is generated by NMscanTables, it is treated like any other table column in meta (?NMinfo) data. |
col.id | name of the subject ID column. Used for calculationof the number of subjects in each table. |
col.row | The name of the row counter column. Optional andonly used to check whether the row counter is in the data. |
details | If TRUE, metadata is added to output. In this case, you get a list. Typically, this is mostly useful when programming functions whose behavior must depend on properties of the output. |
skip.absent | Skip missing output table files with a warning?Default is FALSE in which case an error is thrown. |
meta.only | If TRUE, tables are not read; only a table is returned showing what tables were found and some available meta information. Notice, not all meta information (e.g., dimensions) is available because the tables need to be read to derive that. |
modelname | Only affects meta data table. The model name tobe stored if col.model is not NULL. If not supplied, the namewill be taken from the control stream file name by omittingthe directory/path and deleting the .lst extension(path/run001.lst becomes run001). This can be a characterstring or a function which is called on the value of file(file is another argument to NMscanData). The function musttake one character argument and return another characterstring. As example, see NMdataConf()$modelname. The defaultcan be configured using NMdataConf. |
col.model | Only affects meta data table. A column of thisname containing the model name will be included in thereturned data. The default is to store this in a column called"model". See argument "modelname" as well. Set to NULL if notwanted. Default can be configured using NMdataConf. |
Value
A list of all the tables as data.frames. If details=TRUE, this is in one element, called data, and meta is another element. If not, only the data is returned.
See Also
Other DataRead:NMreadCsv(),NMreadTab(),NMscanData(),NMscanInput()
Examples
tabs1 <- NMscanTables(system.file("examples/nonmem/xgxr001.lst", package="NMdata"))
stamp a dataset or any other object
Description
Dataset metadata can be valuable, e.g. by tracing an archived dataset back to the code that generated it. The metadata added by NMstamp can be accessed using the function NMinfo.
Usage
NMstamp(data, script, time = Sys.time(), ...)Arguments
data | The dataset to stamp. |
script | path to the script where the dataset was generated. |
time | the time stamp to attach. Default is to use the system time (Sys.time()). |
... | other named metadata elements to add to the dataset. Example:Description="PK data for phase 1 trials in project". |
Details
NMstamp modifies the meta data by reference. See example.
Value
data with meta data attached. Class unchanged.
See Also
NMinfo
Other DataCreate:NMorderColumns(),NMwriteData(),addTAPD(),findCovs(),findVars(),flagsAssign(),flagsCount(),mergeCheck(),tmpcol()
Examples
x=1
NMstamp(x,script="example.R",description="Example data")
NMinfo(x)
translate the column names according to the $INPUT section of a control stream
Description
translate the column names according to the $INPUT section of a control stream
Usage
NMtransInp( data, file, lines, translate = TRUE, recover.cols = TRUE, quiet = FALSE)Arguments
data | the data to translate |
file | the list file or control stream |
translate | Do translation according to Nonmem code or not(default 'TRUE')? If not, an overview of column names in dataand in Nonmem code is still returned with the data. |
recover.cols | recover columns that were not used in theNONMEM control stream? Default is TRUE. Can only be negativewhen translate=FALSE. |
quiet | Suppress warnings about data columns? |
Details
If 'translate=FALSE', data is returned with column names as in the data file (not informed by the control stream '$INPUT' section). If 'translate=TRUE', 'NMtransInp' renames and copies columns as specified in '$INPUT'. This means that columns may be renamed, copied, and recreated as described in the 'Renamed columns', 'Copied columns', and 'Dropped columns' notes under NMscanInput.
Value
data with column names translated as specified by the Nonmem control stream. Class same as for the 'data' argument. Class data.table.
Write dataset for use in Nonmem (and R)
Description
Instead of trying to remember the arguments to pass to write.csv, use this wrapper. It tells you what to write in $DATA and $INPUT in Nonmem, and it (additionally) exports an rds file as well, which is highly preferable for use in R. It never edits the data before writing the datafile. The filenames for csv, rds etc. are derived by replacing the extension of the filename given in the file argument.
Usage
NMwriteData( data, file, formats.write = c("csv", "rds"), script, args.stamp, args.fwrite, args.rds, args.RData, args.write_fst, quiet, args.NMgenText, csv.trunc.as.nm = FALSE, genText, save = TRUE, write.csv, write.rds, write.RData, nm.drop, nmdir.data, col.flagn, nm.rename, nm.copy, nm.capitalize, allow.char.TIME)Arguments
data | The dataset to write to file for use in Nonmem. |
file | The file to write to. The extension (everything afterand including last ".") is dropped. csv, rds and otherstandard file name extensions are added. |
formats.write | character vector of formats.write. Default isc("csv","rds"). "fst" is possible too. Default can be modifiedwith |
script | If provided, the object will be stamped with thisscript name before saved to rds or RData. See ?NMstamp. |
args.stamp | A list of arguments to be passed to NMstamp. |
args.fwrite | List of arguments passed to fwrite. Notice thatexcept for "x" and "file", you need to supply all arguments tofwrite if you use this argument. Default values can beconfigured using NMdataConf. |
args.rds | A list of arguments to be passed to saveRDS. |
args.RData | A list of arguments to be passed to save. Pleasenote that writing RData is deprecated. |
args.write_fst | An optional list of arguments to be passedto write_fst. |
quiet | The default is to give some information along the wayon what data is found. But consider setting this to TRUE fornon-interactive use. Default can be configured usingNMdataConf. |
args.NMgenText | List of arguments to pass to NMgenText - the function that generates text suggestion for INPUT and DATA sections in the Nonmem control stream. You can use these arguments to get a text suggestion you can use directly in Nonmem - and |
csv.trunc.as.nm | If TRUE, csv file will be truncated horizontally (columns will be dropped) to match the $INPUT text generated for Nonmem (genText must be TRUE for this option to be allowed). This can be a great advantage when dealing with large datasets that can create problems in parallelization. Combined with write.rds=TRUE, the full data set will still be written to an rds file, so this can be used when combining output and input data when reading model results. This is done by default by NMscanData. This means writing a lean (narrow) csv file for Nonmem while keeping columns of non-numeric class like character and factor for post-processing. |
genText | Run and report results of NMgenText? Default is 'TRUE' if a csv file is written, otherwise 'FALSE'. You may want to disable this if the data set is not for Nonmem. |
save | Save defined files? Default is TRUE. If a variable is used to control whether a script generates outputs (say |
write.csv | Write to csv file? Deprecated, use'formats.write' instead. |
write.rds | write an rds file? Deprecated, use'formats.write' instead. |
write.RData | Deprecated and not recommended - will be removed. RData is not an adequate format for a dataset (but is for environments). Please use write.rds instead. |
nm.drop | Deprecated, useargs.NMgenText=list(drop=c("column")) instead. |
nmdir.data | Deprecated, useargs.NMgenText=list(dir.data="your/path") instead. |
col.flagn | Deprecated, useargs.NMgenText=list(col.flagn="column.name"). Name of anumeric column with zero value for rows to include in Nonmemrun, non-zero for rows to skip. The argument is only used forgenerating the proposed $DATA text to paste into the Nonmemcontrol stream. To skip this feature, use 'col.flagn=FALSE'. |
nm.rename | Deprecated, useargs.NMgenText=list(rename=c(newname="existing")) instead. |
nm.copy | Deprecated, useargs.NMgenText=list(copy=c(newname="existing")) instead. |
nm.capitalize | Deprecated, useargs.NMgenText=list(capitalize=TRUE) instead. |
allow.char.TIME | Deprecated, useargs.NMgenText=list(allow.char.TIME=TRUE) instead. |
Details
When writing csv files, the file will be comma-separated. Because Nonmem does not support quoted fields, you must avoid commas in character fields. An error is returned if commas are found in strings.
The user is provided with text to use in Nonmem. This lists names of the data columns. Once a column is reached that Nonmem will not be able to read as numeric and the column is not in nm.drop, the list is stopped. The only exception is TIME, which is not tested for whether it is character or not.
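A minimal sketch, assuming a data set called pk and made-up file and script names:
## write csv (for Nonmem) and rds (for R), stamp with the generating script,
## and capture the suggested $INPUT/$DATA text
text.nm <- NMwriteData(pk, file="derived/pkdata.csv", script="generate_pkdata.R")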
Value
Text for inclusion in Nonmem control stream, invisibly.
See Also
Other DataCreate:NMorderColumns(),NMstamp(),addTAPD(),findCovs(),findVars(),flagsAssign(),flagsCount(),mergeCheck(),tmpcol()
Replace ($)sections of a Nonmem control stream
Description
Just give the section name, the new lines and the file path, and the section in the Nonmem control stream will be updated.
Usage
NMwriteSection( files, file.pattern, dir, section, newlines, list.sections, location = "replace", newfile, backup = TRUE, blank.append = TRUE, data.file, write = TRUE, quiet, simplify = TRUE)Arguments
files | File paths to the models (control stream) toedit. See file.pattern too. |
file.pattern | Alternatively to files, you can supply aregular expression which will be passed to list.files as thepattern argument. If this is used, use 'dir' argument aswell. Also see data.file to only process models that use aspecific data file. |
dir | If file.pattern is used, 'dir' is the directory to searchin. |
section | The name of the section to update, with or without "$". Example: 'section="EST"' or 'section="$EST"' to edit the sections starting by '$EST'. Section specification is not case-sensitive. See '?NMreadSection' too. |
newlines | The new text (including "$SECTION"). Better be broken into lines in a character vector since this is simply passed to |
list.sections | Named list of new sections, each element containing a section. Names must be section names, contents of each element are the new section lines for each section. |
location | In combination with 'section', this determineswhere the new section is inserted. Possible values are"replace" (default), "before", "after", "first", "last". |
newfile | path and filename to new run. If missing, theoriginal file (from |
backup | In case you are overwriting the old file, do youwant to backup the file (to say, backup_run001.mod)? |
blank.append | Append a blank line to output? |
data.file | Use this to limit the scope of models to those that use a specific input data file. The string has to exactly match the one in '$DATA' or '$INFILE' in Nonmem. |
write | Default is to write to file. If write=FALSE,'NMwriteSection()' returns the resulting input.txt without writingit to disk. Default is 'TRUE'. |
quiet | The default is to give some information along the wayon what data is found. But consider setting this to TRUE fornon-interactive use. Default can be configured using'NMdataConf()'. |
simplify | If TRUE (default) and only one file is edited, theresulting rows are returned directly. If more than one file isedited, the result will always be a list with one element perfile. |
Details
The new file will be written with unix-style line endings.
Value
The new section text is returned. If write=TRUE, this is done invisibly.
See Also
Other Nonmem:NMapplyFilters(),NMextractText(),NMgenText(),NMreadSection(),NMreplaceDataFile()
Examples
newlines <- "$EST POSTHOC INTERACTION METHOD=1 NOABORT PRINT=5 MAXEVAL=9999 SIG=3"NMwriteSection(files=system.file("examples/nonmem/xgxr001.mod", package = "NMdata"),section="EST", newlines=newlines,newfile=NULL)## Not run: text.nm <- NMwriteData(data)NMwriteSection(dir="nonmem", file.pattern="^run.*\\.mod", list.sections=text.nm["INPUT"])## End(Not run)Add blocking info to parameter set
Description
Add blocking info to parameter set
Usage
addBlocks(pars, col.model = "model")Arguments
pars | The parameter, as returned by 'NMreadExt()' |
col.model | Name of the model name column. |
add correlations of off-diagonal OMEGA and SIGMA elements to a parameter table
Description
add correlations of off-diagonal OMEGA and SIGMA elements to a parameter table
Usage
addCor(pars, by = NULL, as.fun, col.value = "value")Arguments
pars | A parameter table, like returned by 'NMreadExt()'. |
by | The name of a column, as a string. Calculate thecorrelations within a grouping variable? This will often be acolumn containing the model name. |
as.fun | See '?NMdataConf' |
col.value | The name of the column from which to take the'OMEGA' values. Default is "value" in alignment with theoutput from 'NMreadExt()'. |
Value
The parameter table with a 'corr' column added.
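An illustrative sketch, assuming a hypothetical model "run001.mod" and that the NMreadExt() output contains a "model" column:
pars <- NMreadExt("run001.mod")
pars <- addCor(pars, by="model")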
Deprecated: use addCor. Add correlations to parameter table
Description
All arguments are passed to 'addCor()'. See '?addCor'.
Usage
addOmegaCorr(...)Arguments
... | Passed to addCor |
Value
The parameter table with a 'corr' column added.
Fill parameter names indexes in a data set
Description
Add par.type, i, and j to a data.table that already contains a parameter column
Usage
addParType(pars, suffix, add.idx, overwrite = FALSE)Arguments
pars | Table of parameters to augment with additional columns |
suffix | Optional string to add to all new column names, except possibly 'i' and 'j'. |
add.idx | Add 'i' and 'j'? Default is 'TRUE' if no suffix is supplied, and 'FALSE' if a suffix is specified. |
overwrite | Overwrite non-missing values? Default is 'FALSE'. |
Details
'addParType()' fills in data sets of Nonmem parameter values to include the following variables (columns):
parameter: THETA1, OMEGA(1,1), SIGMA(1,1), OBJ, SAEMOBJ
par.name: THETA(1), OMEGA(1,1), SIGMA(1,1), OBJ, SAEMOBJ
par.type: THETA, OMEGA, SIGMA, OBJ
i: 1, 1, 1, NA, NA (no indexes for OBJ)
j: NA, 1, 1, NA, NA (j not defined for THETA)
As a last step, addParameter is called with overwrite=FALSE. This fills parameter and par.name. Combined, if parameter is in pars, it is used. If not, par.type, i, and j are used.
In the provided data set, parameter is allowed to have thetas as THETA(1) (the par.name format). These will however be overwritten with the format described above.
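A small sketch, assuming addParType() accepts a plain data.frame with a parameter column:
pars <- data.frame(parameter=c("THETA1","OMEGA(2,1)","SIGMA(1,1)"))
addParType(pars)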
add parameter based on par.type and i,j
Description
Columns filled or overwritten: parameter, par.name.
Usage
addParameter(pars, overwrite = FALSE)Arguments
pars | Table of parameters to augment with additional columns. |
overwrite | Overwrite non-missing values? Default is 'FALSE'. |
Create a variable in initial value table to keep track of SAME blocks, i.e. parameters that are part of a single distribution
Description
Create a variable in initial value table to keep track of SAME blocks, i.e. parameters that are part of a single distribution
Usage
addSameBlocks(inits)Arguments
inits | Table of initial values as created by NMreadInits(). |
Details
sameblock:
if not part of a distribution repeated using SAME: 0
if part of a distribution repeated using SAME: counter (1,2,...) of the unique distribution blocks that are being reused.
Nsameblock: The number of SAME calls used for a distribution block. If SAME(N) notation is used, Nsameblock=N.
Author(s)
Brian Reilly
Add time since previous dose to data, time of previous dose, most recent dose amount, cumulative number of doses, and cumulative dose amount.
Description
For now, doses have to be in data as EVID=1 and/or EVID=4 records. They can be in the format of one row per dose or repeated dosing notation using ADDL and II.
Usage
addTAPD( data, col.id, col.time, col.evid = "EVID", col.amt = "AMT", col.tpdos = "TPDOS", col.tapd = "TAPD", col.pdosamt = "PDOSAMT", col.doscuma = "DOSCUMA", col.doscumn = "DOSCUMN", prefix.cols, suffix.cols, subset.dos, subset.is.complete, order.evid = c(3, 0, 2, 4, 1), by, SDOS = 1, quiet, as.fun, col.ndoses)Arguments
data | The data set to add the variables to. |
col.id | The name of the column with the subjectidentifier. All calculations are by default done by subject,so this column name must be provided. Default is controlled by'?NMdataConf()'. |
col.time | Name of time column on which calculations ofrelative times will be based. Default it |
col.evid | The name of the event ID column. This must existin data. Default is EVID. |
col.amt | The name of the dose amount column. This must exist in data. Default is AMT. |
col.tpdos | Name of the time of previous dose column (created by |
col.tapd | Name of the time since previous dose column (created by |
col.pdosamt | The name of the column to be created holdingthe previous dose amount. Set to NULL to not create thiscolumn. |
col.doscuma | The name of the column to be created holdingthe cumulative dose amount. Set to NULL to not create thiscolumn. |
col.doscumn | The name of the column (created by addTAPD)that holds the cumulative number of doses administered to thesubject. Set to NULL to not create this column. |
prefix.cols | String to be prepended to all generated columnnames, that is each of col.tpdos, col.tapd, col.ndoses,col.pdosamt, col.doscuma that are not NULL. |
suffix.cols | String to be appended to all generated columnnames, that is each of col.tpdos, col.tapd, col.ndoses,col.pdosamt, col.doscuma that are not NULL. |
subset.dos | A string that will be evaluated as a customexpression to identify relevant events. See subset.is.completeas well. |
subset.is.complete | Only used in combination withnon-missing subset.dos. By default, subset.dos is used inaddition to the impact of col.evid (must be 1 or 4) andcol.amt (greater than zero). If subset.is.complete=TRUE,subset.dos is used alone, and col.evid and col.amt arecompletely ignored. This is typically useful if the events arenot doses but other events that are not expressed as a typicaldose combination of EVID and AMT columns. |
order.evid | Order of events. This will only matter if there are simultaneous events of different event types within subjects. Typically if using nominal time, it may be important to specify whether samples at dosing times are pre-dose samples. The default is 'c(3,0,2,4,1)' - i.e. samples and simulations are pre-dose. See details. |
by | Columns to do calculations within. Default is ID. |
SDOS | Scaling value for columns related to dose amount, relative to AMT values. col.pdosamt and col.doscuma are affected and will be derived as AMT/SDOS. |
quiet | Suppress messages? Default can be set using 'NMdataConf()'. |
as.fun | The default is to return data as a data.frame. Passa function (say 'tibble::as_tibble') in as.fun to convert tosomething else. If data.tables are wanted, use'as.fun="data.table"'. The default can be configured usingNMdataConf. |
col.ndoses | Deprecated. Use col.doscumn instead. |
Details
addTAPD does not require the data to be ordered, and it will not order it. This means you can run addTAPD before ordering data (which may be one of the final steps in data set preparation). The argument called order.evid is important because of this. If a dosing event and a sample occur at the same time, which dose was the previous one for that sample? Default is to assume the sample is a pre-dose sample, and hence output will be calculated in relation to the dose before. If no dose event is found before, NA's will be assigned.
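A minimal sketch, assuming a data set pk with ID, TIME, EVID and AMT columns (and a nominal time column NOMTIME for the second call):
pk <- addTAPD(pk)
## base the calculations on nominal time instead of actual time
pk <- addTAPD(pk, col.time="NOMTIME")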
Value
A data.frame with additional columns
See Also
Other DataCreate:NMorderColumns(),NMstamp(),NMwriteData(),findCovs(),findVars(),flagsAssign(),flagsCount(),mergeCheck(),tmpcol()
Convert object to class NMctl
Description
Convert object to class NMctl
Usage
as.NMctl(x, ...)Arguments
x | object to convert |
... | Not used |
Value
An object of class 'NMctl'.
Create character vectors without quotation marks
Description
When creating character vectors with several elements, it becomes a lot of quotes to type. cc provides a simple way to skip the quotes - but only for simple strings.
Usage
cc(...)Arguments
... | The unquoted names that will become character values inthe returned vector. |
Details
Don't use cc with any special characters - only alphanumerics and no spaces supported. Also, remember that numerics are converted using as.character. E.g., this means that leading zeros are dropped.
Value
A character vector
See Also
cl
Examples
cc(a,b,`a b`)
cc(a,b,"a b")
## be careful with spaces and special characters
cc( d)
cc(" d")
cc()
## Numerics are converted using as.character
cc(001,1,13e3)
check that col.row is not edited in Nonmem control stream
Description
In order to safely merge by a unique row identifier, that row identifier must not be edited from input to output. checkColRow helps checking that based on the control stream.
Usage
checkColRow(col.row, file)Arguments
col.row | The name of the unique row identifier (say "ROW"). |
file | a list file or input control stream file path. |
Value
TRUE if no issues found
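A minimal sketch (the control stream path is made up):
## check that ROW is not modified between $INPUT and $TABLE
checkColRow("ROW", "run001.lst")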
Define a vector with factor levels in the same order as occurring in the vector.
Description
This is a shortcut for creating factors with levels as the order of appearance of the specified levels.
Usage
cl(...)Arguments
... | unique elements or vectors with unique elements |
Value
A factor (vector)
See Also
cc
Examples
factor("b","a")cl("b","a")x <- c("b","a")factor(x)cl(x)Drop leading, trailing and repeated spaces in character strings
Description
Drop leading, trailing and repeated spaces in character strings
Usage
cleanSpaces(x, double = TRUE, lead = TRUE, trail = TRUE)Arguments
x | A vector of character strings to modify |
double | Replace any number of consecutive blank spaces by asingle blank. Default is TRUE. |
lead | Drop spaces before first non-empty character. Defaultis TRUE. |
trail | Drop spaces after last non-empty character. Defaultis TRUE. |
Value
A vector of class character
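A small sketch:
cleanSpaces(c("  Body   weight ", " Age  (years)"))
## "Body weight" "Age (years)"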
Extract column labels as defined in SAS
Description
Extract column labels as defined in SAS
Usage
colLabels(x, sort = "alpha")Arguments
x | object with elements containing label attributes. |
sort | If sort="alpha", results are sorted alphabetically. |
Value
A data.frame with variables and their labels
See Also
compareCols NMinfo
Compare elements in lists with aim of combining
Description
Useful interactive tool when merging or binding objects together. It lists the names of elements that differ in presence or class across multiple datasets. Before running rbind, you may want to check the compatibility of the data.
Usage
compareCols( ..., list.data, keep.names = TRUE, test.equal = FALSE, diff.only = TRUE, cols.wanted, fun.class = base::class, quiet, as.fun, keepNames, testEqual)Arguments
... | objects which element names to compare |
list.data | As alternative to ..., you can supply the datasets in a list here. |
keep.names | If TRUE, the original dataset names are used inreported table. If not, generic x1, x2,... are used. Thelatter may be preferred for readability. |
test.equal | Do you just want a TRUE/FALSE to whether thenames of the two objects are the same? Default is FALSE whichmeans to return an overview for interactive use. You mightwant to use TRUE in programming. However, notice that thischeck may be overly rigorous. Many classes are compatibleenough (say numeric and integer), and compareCols doesn't takethis into account. |
diff.only | If TRUE, don't report columns where no differencefound. Default is TRUE if number of data sets supplied isgreater than one. If only one data set is supplied, the fulllist of columns is shown by default. |
cols.wanted | Columns of special interest. These will alwaysbe included in overview and indicated by a prepended * to thecolumn names. This argument is often useful when you start bydefining a set of columns that you want to end up with bycombining a number of data sets. |
fun.class | the function that will be run on each column tocheck for differences. |
quiet | The default is to give some information along the wayon what data is found. But consider setting this to TRUE fornon-interactive use. Default can be configured usingNMdataConf. |
as.fun | A function that will be run on the result beforereturning. If first input data set is a data.table, thedefault is to return a data.table, if not the default is toreturn a data.frame. Use whatever to get what fits in withyour workflow. Default can be configured with NMdataConf. |
keepNames | Deprecated. Use keep.names instead. |
testEqual | Deprecated. Use test.equal instead. |
Details
Technically, this function compares classes of elements in lists. However, in relation to NMdata, this will most of the time be columns in data.frames.
Despite the name of the argument fun.class, it can be any function to be evaluated on each element in '...'. See examples for how to extract SAS labels on an object read with 'read_sas' from the 'haven' package.
Value
A data.frame with an overview of elements and their classes of objects in ... Class as defined by as.fun.
See Also
Other DataWrangling:dims(),listMissings()
Examples
## get SAS labels from objects read with haven::read_sas
## Not run: compareCols(...,fun.class=function(x)attributes(x)$label)
## End(Not run)
Assign i and j indexes based on parameter section text
Description
Internal function used by NMreadInits()
Usage
count_ij(res)Arguments
res | elements as detected by 'NMreadInits()' |
A standard-evaluation interface to 'data.table::dcast()'
Description
A standard-evaluation interface to 'data.table::dcast()'
Usage
dcastSe(data, l, r, ...)Arguments
data | data set to transpose (widen) |
l | left-hand side variables as character vector. Result will be long/vertical in these variables. |
r | right-hand side variables as character vector. Result will be wide in these variables. |
... | Additional arguments passed to 'data.table::dcast()'. |
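A guessed sketch, assuming dcastSe() builds the dcast() formula from 'l' and 'r'; the long-format data.table dt and its columns are hypothetical:
## one row per ID and TIME, one column per value of "param"
wide <- dcastSe(dt, l=c("ID","TIME"), r="param", value.var="value")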
Report if an argument is deprecated.
Description
Only supposed to be called from within a function. For now only works for arguments that have been replaced by others.
Usage
deprecatedArg(oldarg, newarg, args, msg = NULL, which = 2)Arguments
oldarg | The deprecated argument name (a character string). |
newarg | The non-deprecated argument name (a characterstring). |
args | List of arguments in the function call to look foroldarg and newarg. See '?getArgs'. If missing, 'getArgs()'will be called from within 'deprecatedArg'. See 'which' too. |
which | If calling 'getArgs' this is passed along, referringto how many environments to jump to look at arguments. |
Value
The coalesced value of arguments
See Also
Other arguments:getArgs()
Examples
## Not run: 
fun1 <- function(a=1,b=2){
    ## b is deprecated
    a <- deprecatedArg("b","a")
    a
}
expect_error( fun1(a=1,b=2) )
expect_message( fun1(b=2) )
## End(Not run)
Get dimensions of multiple objects
Description
Get dimensions of multiple objects
Usage
dims(..., list.data, keep.names = TRUE, as.fun = NULL, keepNames)Arguments
... | data sets |
list.data | As alternative to ..., you can supply the datasets in a list here. |
keep.names | If TRUE, the original dataset names are used inreported table. If not, generic x1, x2,... are used. Thelatter may be preferred for readability in some cases. |
as.fun | A function that will be run on the result beforereturning. If first input data set is a data.table, thedefault is to return a data.table, if not the default is toreturn a data.frame. Use whatever to get what fits in withyour workflow. Default can be configured with NMdataConf. |
keepNames | Deprecated. Use keep.names instead. |
Value
A data.frame with dimensions of objects in ... Actual class defined by as.fun.
See Also
Other DataWrangling:compareCols(),listMissings()
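A small sketch:
df1 <- data.frame(a=1:4)
df2 <- data.frame(a=1:3, b=5:7)
dims(df1, df2)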
Convert a data.table of parameter estimates to a matrix
Description
Often needed when using estimates of Omega or Sigma matrices in further calculations.
Usage
dt2mat(pars, dt.subset = "unique", max.i, fill = 0, col.value)Arguments
pars | A data.table with parameters. Must contain columns 'i' and 'j' with row and column indexes and 'est' with parameter (matrix) values. |
dt.subset | Specifies whether pars contains only a lower or upper triangle of an assumed symmetric matrix (most often the case for variance-covariance matrices), or it contains the full matrix. 'dt.subset="unique"' (default) means that 'pars' only contains either upper or lower diagonal matrix (including diagonal), 'dt.subset="all"' means 'pars' contains both upper and lower triangles. See details. |
max.i | By default, the maximum row number is derived as the maximum value in the 'i' column. If more (empty ones) are needed, specify the maximum row number with 'max.i'. This can be necessary in cases where only estimated elements are available but a full matrix including elements related to fixed parameters is needed. |
fill | Value to insert for missing elements |
col.value | The name of the column from which to take the'OMEGA' values. Default is "value" in alignment with theoutput from 'NMreadExt()'. |
Details
If pars does not contain all 'i' values, they will be imputed with zeros. The desired matrix dimension is inferred from 'min(i)' and 'max(i)'. In case 'dt.subset=="unique"', missing 'j' elements will also give imputations of missing elements.
Value
a matrix
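An illustrative sketch, assuming parameter estimates read with NMreadExt() from a hypothetical model and the default col.value="value":
pars <- NMreadExt("run001.mod", as.fun="data.table")
## build the OMEGA matrix from its (lower-triangle) estimates
OM <- dt2mat(pars[par.type=="OMEGA"])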
Create a data table with columns of unequal length
Description
Create a data table with columns of unequal length
Usage
dtFillCols(...)Arguments
... | Vectors to put into data.table |
Apply function and return a data.frame
Description
A convenience function that returns a data.frame with a column representing the input values and a column with results. This is still experimental and will not work for many input structures.
Usage
dtapply( X, FUN, value.names = NULL, element.name = "element", fill = TRUE, as.fun, ...)Arguments
X | Like for 'lapply()', an object to process (typically avector or a list). Passed to 'lapply()'. |
FUN | Function to run for each element in 'X'. Passed to'lapply()'. |
value.names | If supplied, setnames will be run on each element returned by lapply with value.names as the 'new' argument. |
element.name | What to call the column holding the values or the names of 'X'? Default is "element". What goes into this column depends on the class of 'X'. If 'X' is a character vector, it will be the values of 'X'. If 'X' is a list, it will be the names of the elements in the list. |
fill | Fill resulting data with 'NA's in order to combine into a single 'data.frame'? Default is TRUE. |
as.fun | The default is to return data as a data.frame. Passa function in as.fun to convert to something else. Ifdata.tables are wanted, use 'as.fun="data.table"'. The defaultcan be configured using 'NMdataConf()'. |
... | arguments passed to lapply |
Details
Only functions that return vectors are currently supported. dtapply should support functions that return data.frames.
Value
a data.table
Replace strings in character columns of a data set
Description
Replace strings in character columns of a data set
Usage
editCharCols(data, pattern, replacement, as.fun, ...)Arguments
data | The data set to edit. |
pattern | Pattern to search for in character columns. Passedto 'gsub()'. By default, 'gsub()' works with regularexpressions. See ... for how to disable this if you want toreplace a specific string. |
replacement | pattern or string to replace with. Passed to'gsub()'. |
as.fun | The default is to return data as a data.frame. Passa function (say tibble::as_tibble) in as.fun to convert tosomething else. If data.tables are wanted, useas.fun="data.table". The default can be configured usingNMdataConf. |
... | Additional arguments passed to 'gsub()'. Especially, notice fixed=TRUE will disable interpretation of 'pattern' and 'replacement' as regular expressions. |
Value
a data.frame
Examples
### remove commas from character columns
dat <- data.frame(A=1:3,text=cc(a,"a,d","g"))
editCharCols(dat,pattern=",","")
### factors are not edited but result in an error
## Not run: 
dat <- data.frame(A=1:3,text=cc(a,"a,d",g),fac=cl("a","a,d","g"))
editCharCols(dat,pattern=",","")
## End(Not run)
Expand grid of data.tables
Description
Expand grid of data.tables
Usage
egdt(dt1, dt2, quiet)Arguments
dt1 | a data.table. |
dt2 | another data.table. |
quiet | The default is to give some information along the wayon what data is found. But consider setting this to TRUE fornon-interactive use. Default can be configured usingNMdataConf. |
Details
Merging works mostly similarly for data.frame and data.table. However, for data.table the merge must be done by one or more columns. This means that the convenient way to expand all combinations of all rows in two data.frames is not available for data.tables. This function provides that functionality. It always returns data.tables.
Value
a data.table that expands combinations of rows in dt1 anddt2.
Examples
df1 <- data.frame(a=1:2,b=3:4)
df2 <- data.frame(c=5:6,d=7:8)
merge(df1,df2)
library(data.table)
## This is not possible
## Not run: merge(as.data.table(df1),as.data.table(df2),allow.cartesian=TRUE)
## End(Not run)
## Use egdt instead
egdt(as.data.table(df1),as.data.table(df2),quiet=TRUE)
## Dimensions are conveniently listed for interactive use
res <- egdt(as.data.table(df1),as.data.table(df2))
Clean and standardize file system paths
Description
Use this to tidy up paths. Combines pieces of a path like file.path(). The function is intended to return a canonical path format, i.e. paths that can be compared by simple string comparison. Redundant /'s removed. normalizePath is used to possibly shorten path.
Usage
filePathSimple(...)Arguments
... | additional arguments passed to file.path(). |
Value
A (character) file path
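A small, hedged sketch (the exact result depends on the working directory because of normalizePath):
## combine pieces and clean up the redundant separator
filePathSimple("models//nonmem", "run001.mod")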
Extract columns that vary within values of other columns
Description
This function provides an automated method to extract covariate-like columns. The user decides which columns these variables cannot vary within. So if you have repeated measures for each ID, this function can find the columns that are constant within ID and their unique values for each ID. Or, you can provide a combination of columns in by, say ID and STUDY, and get variables that do not vary within unique combinations of these.
Usage
findCovs(data, by = NULL, cols.id, as.fun = NULL)Arguments
data | data.frame in which to look for covariates |
by | covariates will be searched for in combinations of values in these columns. Often by will be either empty or ID. But it can also be multiple columns, say c("ID","DRUG") or c("ID","TRT"). |
cols.id | Deprecated. Use by instead. |
as.fun | The default is to return a data.table if data is a data.tableand return a data.frame in all other cases. Pass a function in as.fun toconvert to something else. If data is not a data.table, the default canbe configured using NMdataConf. |
Value
a data set with one observation per combination of values of variables listed in by.
See Also
Other DataCreate:NMorderColumns(),NMstamp(),NMwriteData(),addTAPD(),findVars(),flagsAssign(),flagsCount(),mergeCheck(),tmpcol()
Examples
dt1=data.frame(ID=c(1,1,2,2),
   OCC=c(1,2,1,2),
   ## ID level
   eta1=c(1,1,3,3),
   ## occasion level
   eta2=c(1,3,1,5),
   ## not used
   eta3=0
   )
## model level
findCovs(dt1)
## ID level
findCovs(dt1,"ID")
## actual ID level
findVars(findCovs(dt1,"ID"))
## occasion level
findCovs(findVars(dt1,"ID"),c("ID","OCC"))
## Based on a "real data example"
## Not run: 
dat <- NMscanData(system.file("examples/nonmem/xgxr001.lst", package = "NMdata"))
findCovs(dat,by="ID")
### Without an ID column we get non-varying columns
findCovs(dat)
## End(Not run)
Extract columns that vary within values of other columns in a data.frame
Description
Use this if you want to look at the variability of a number of columns while disregarding those that are constant. Like for findCovs, by can be of arbitrary length.
Usage
findVars(data, by = NULL, cols.id, as.fun = NULL)Arguments
data | data.frame in which to look for covariates |
by | optional. Covariates will be searched for in combinations of values in these columns. Often by will be either empty or ID. But it can also be multiple columns, say c("ID","DRUG") or c("ID","TRT"). |
cols.id | Deprecated. Use by instead. |
as.fun | The default is to return a data.table if data is a data.tableand return a data.frame in all other cases. Pass a function in as.fun toconvert to something else. If data is not a data.table, the default canbe configured using NMdataConf. |
Details
Use this to exclude columns that are constant within by. If by=ID, this could be to get only time-varying covariates.
Value
a data set with as many rows as in data.
See Also
Other DataCreate:NMorderColumns(),NMstamp(),NMwriteData(),addTAPD(),findCovs(),flagsAssign(),flagsCount(),mergeCheck(),tmpcol()
Examples
dt1 <- data.frame(ID=c(1,1,2,2),
   OCC=c(1,2,1,2),
   ## ID level
   eta1=c(1,1,3,3),
   ## occasion level
   eta2=c(1,3,1,5),
   ## not used
   eta3=0
   )
## model level
findCovs(dt1)
## ID level
findCovs(dt1,"ID")
## actual ID level
findVars(findCovs(dt1,"ID"))
## occasion level
findCovs(findVars(dt1,"ID"),c("ID","OCC"))
Assign exclusion flags to a dataset based on specified table
Description
The aim with this function is to take a (say PK) dataset and a pre-specified table of flags, and assign the flags automatically.
Usage
flagsAssign( data, tab.flags, subset.data, col.flagn, col.flagc, flags.increasing = FALSE, grp.incomp = "EVID", flagc.0 = "Analysis set", as.fun = NULL)Arguments
data | The dataset to assign flags to. |
tab.flags | A data.frame containing at least these named columns: FLAG, flag, condition. Condition is disregarded for FLAG==0. FLAG must be numeric and non-negative, flag and condition are characters. |
subset.data | An optional string that provides a subset of data to assign flags to. A common example is subset="EVID==0" to only assign to observations. Numerical and character flags will be missing in rows that are not matched by this subset. |
col.flagn | The name of the column containing the numerical flag values in tab.flags. This will be added to data. Default value is FLAG and can be configured using NMdataConf. |
col.flagc | The name of the column containing the character flag values in tab.flags. This will be added to data. Default value is flag and can be configured using NMdataConf. |
flags.increasing | The flags are applied by either decreasing (default) or increasing value of col.flagn. Decreasing order means that conditions associated with higher values of col.flagn will be evaluated first. By using decreasing order, you can easily adjust the Nonmem IGNORE statement from IGNORE(FLAG.NE.0) to say IGNORE(FLAG.GT.10) if BLQ's have FLAG=10, and you decide to include these in the analysis. |
grp.incomp | Column(s) that distinguish incompatible subsets of data. Default is "EVID" meaning that if different values of EVID are found in data, the function will return an error. This is a safeguard not to mix data unintentionally when counting flags. |
flagc.0 | The character flag to assign to rows that are not matched by exclusion conditions (numerical flag 0). |
as.fun | The default is to return data.tables if input data is a data.table, and return a data.frame for all other input classes. Pass a function in as.fun to convert to something else. If return.all=FALSE, this is applied to data and tab.flags independently. |
Details
tab.flags must contain a column with numerical exclusion flags, one with character exclusion flags, and one with expressions to evaluate for whether to apply the exclusion flag. The flags are applied sequentially, by increasing value of the numerical exclusion flag.
Value
The dataset with flags added. Class as defined by as.fun. See parameter flags.return as well.
See Also
Other DataCreate: NMorderColumns(), NMstamp(), NMwriteData(), addTAPD(), findCovs(), findVars(), flagsCount(), mergeCheck(), tmpcol()
Examples
## Not run:
pk <- readRDS(file=system.file("examples/data/xgxr2.rds",package="NMdata"))
dt.flags <- data.frame(
    flagn=10,
    flagc="Below LLOQ",
    condition=c("BLQ==1"))
pk <- flagsAssign(pk,dt.flags,subset.data="EVID==0",
                  col.flagn="flagn",col.flagc="flagc")
pk <- flagsAssign(pk,subset.data="EVID==1",flagc.0="Dosing",
                  col.flagn="flagn",col.flagc="flagc")
unique(pk[,c("EVID","flagn","flagc","BLQ")])
flagsCount(pk[EVID==0],dt.flags,col.flagn="flagn",col.flagc="flagc")
## End(Not run)
Create an overview of number of retained and discarded data points.
Description
Generate an overview of the number of observations disregarded due to different reasons, and how many are left after each exclusion flag.
Usage
flagsCount(
  data,
  tab.flags,
  file,
  col.id = "ID",
  col.flagn,
  col.flagc,
  by = NULL,
  flags.increasing = FALSE,
  flagc.0 = "Analysis set",
  name.all.data = "All available data",
  grp.incomp = "EVID",
  save = TRUE,
  quiet = FALSE,
  as.fun = NULL
)
Arguments
data | The dataset including both FLAG and flag columns. |
tab.flags | A data.frame containing at least these named columns: FLAG, flag, condition. Condition is disregarded for FLAG==0. |
file | A file to write the table of flag counts to. Will probably be removed and put in a separate function. |
col.id | The name of the subject ID column. Default is "ID". |
col.flagn | The name of the column containing the numerical flag values in tab.flags. This will be added to data. Use the same as when flagsAssign was called (if that was used). Default value is FLAG and can be configured using NMdataConf. |
col.flagc | The name of the column containing the character flag values in data and tab.flags. Use the same as when flagsAssign was called (if that was used). Default value is flag and can be configured using NMdataConf. |
by | An optional column to group the counting by. This could be "STUDY", "DRUG", "EVID", or a combination of multiple columns. |
flags.increasing | The flags are applied by either decreasing (default) or increasing value of col.flagn. By using decreasing order, you can easily adjust the Nonmem IGNORE statement from IGNORE(FLAG.NE.0) to say IGNORE(FLAG.GT.10) if BLQ's have FLAG=10, and you decide to include these in the analysis. |
flagc.0 | The character flag to assign to rows that are not matched by exclusion conditions (numerical flag 0). |
name.all.data | What to call the total set of data before applying exclusion flags. Default is "All available data". |
grp.incomp | Column(s) that distinguish incompatible subsets of data. Default is "EVID" meaning that if different values of EVID are found in data, the function will return an error. This is a safeguard not to mix data unintentionally when counting flags. |
save | Save file? Default is TRUE, meaning that a file will be written if the file argument is supplied. |
quiet | Suppress non-critical messages? Default is 'FALSE'. |
as.fun | The default is to return a data.table if input data is a data.table, and return a data.frame for all other input classes. Pass a function in as.fun to convert to something else. If data is not a data.table, the default can be configured using NMdataConf. |
Details
This function is used to count flags as assigned by the flagsAssign function.
Notice that the character flags reported in the output table are taken from tab.flags. The data column named by the value of col.flagc (default is flag) is not used.
In the returned table, N.discarded is the difference in the number of subjects since the previous step. If two is reported, it can mean that the last remaining observation from each of these two subjects is discarded due to this flag; the majority of their samples may have been discarded by earlier flags.
Value
A summary table with the number of discarded and retained subjects and observations when applying each condition in the flag table. "discarded" means the reduction in the number of observations and subjects resulting from the flag; "retained" means the numbers that are left after application of the flag. The default is "both" which will report both. Class as defined by as.fun.
See Also
Other DataCreate: NMorderColumns(), NMstamp(), NMwriteData(), addTAPD(), findCovs(), findVars(), flagsAssign(), mergeCheck(), tmpcol()
Examples
## Not run:
pk <- readRDS(file=system.file("examples/data/xgxr2.rds",package="NMdata"))
dt.flags <- data.frame(
    flagn=10,
    flagc="Below LLOQ",
    condition=c("BLQ==1"))
pk <- flagsAssign(pk,dt.flags,subset.data="EVID==0",
                  col.flagn="flagn",col.flagc="flagc")
pk <- flagsAssign(pk,subset.data="EVID==1",flagc.0="Dosing",
                  col.flagn="flagn",col.flagc="flagc")
unique(pk[,c("EVID","flagn","flagc","BLQ")])
flagsCount(pk[EVID==0],dt.flags,col.flagn="flagn",col.flagc="flagc")
## End(Not run)
paste something before file name extension.
Description
Append to a file name like file.mod to get file_1.mod or file_pk.mod. If the addition is a number, zeros can be padded if wanted. The separator (default is underscore) can be modified.
Usage
fnAppend(
  fn,
  x,
  pad0 = 0,
  sep = "_",
  collapse = sep,
  position = "append",
  allow.noext = FALSE
)
Arguments
fn | The file name or file names to modify. |
x | A character string or a numeric to add to the file name. If a vector, the vector is collapsed to a single string, using 'sep' as separator in the collapsed string. |
pad0 | In case x is numeric, a number of zeros to pad before the appended number. This is useful if you are generating say more than 10 files, and your counter will be 01, 02, ..., 10, ... and not 1, 2, ..., 10, ... |
sep | The separator between the existing file name (until extension) and the addition. |
collapse | If 'x' is of length greater than 1, the default is to collapse the elements to a single string using 'sep' as separator. See the 'collapse' argument to '?paste'. If you want to treat them as separate strings, use 'collapse=NULL' which will lead to generation of separate file names. However, currently 'fn' or 'x' must be of length 1. |
position | "append" (default) or "prepend". |
allow.noext | Allow 'fn' to be string(s) without extensions? Default is 'FALSE' in which case an error will be thrown if 'fn' contains strings without extensions. If 'TRUE', 'x' will be appended to fn in these cases. |
Value
A character (vector)
Examples
fnAppend("plot.png",1)fnAppend("plot.png",1,pad0=2,sep="-")fnAppend("plot.png","one")fnAppend("plot","one",allow.noext=TRUE)## multiple x gives one collapsed stringfnAppend("plot.png",1:2)fnAppend("plot.png",1:2,pad0=2)Change file name extension
Description
Very simple but often applicable function to retrieve or change the file name extension (from say file.lst to file.mod).
Usage
fnExtension(fn, ext)
Arguments
fn | file name. Often ending in an extension after a period but the extension is not needed. |
ext | new file name extension. If omitted or NULL, the extension of fn is returned. |
Value
A text string
Examples
fnExtension("file.lst",".mod")fnExtension("file.lst","mod")fnExtension("file.lst","..mod")fnExtension("file.lst",cc(.mod,xml))fnExtension(cc(file1.lst,file2.lst),cc(.xml))fnExtension(cc(file1.lst,file2.lst),cc(.xml,.cov))fnExtension("file.lst","")fnExtension("file.lst")Get provided arguments as a named list
Description
Get provided arguments as a named list
Usage
getArgs(call, env)
Arguments
call | Function call as provided by |
env | Environment in which to evaluate the arguments. |
Value
A named list of arguments and their values
See Also
Other arguments: deprecatedArg()
Examples
afun <- function(){
    NMdata:::getArgs(sys.call(),parent.frame())
}
afun()
Internal interpretation of file specification options
Description
Internal interpretation of file specification options
Usage
getFilePaths(files = NULL, file.pattern = NULL, dir = NULL, quiet)
Arguments
files | character vector of full file paths. Specify either files or both file.pattern and dir. |
file.pattern | A regular expression to look for in dir. If used, dir must also be supplied. |
dir | The directory in which to look for file.pattern. dir is passed to 'list.files()'. If supplied, file.pattern must also be supplied. |
quiet | The default is not to be quiet. |
Value
A character vector of full paths to files
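The package does not ship examples for this internal helper. A minimal sketch of the two documented ways to specify files, calling through the internal namespace; the example directory and file pattern below are assumptions for illustration only:
## illustrative sketch only; getFilePaths() is internal to NMdata
dir.ex <- system.file("examples/nonmem", package="NMdata")
NMdata:::getFilePaths(file.pattern="xgxr001\\.lst", dir=dir.ex, quiet=TRUE)
NMdata:::getFilePaths(files=file.path(dir.ex,"xgxr001.lst"), quiet=TRUE)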
read lines as needed
Description
Functions that take file and lines arguments can use this functionto derive lines no matter what was provided.
Usage
getLines(
  file,
  lines,
  linesep = "\n",
  simplify = TRUE,
  col.model,
  modelname,
  as.one
)
Arguments
file | A file path to a text file to read. |
lines | Text lines if file was already read. |
as.one | If the 'file' argument is used and if 'as.one' is TRUE, the file(s) are read and put into a 'data.table' with a model column and a 'text' column. Default is FALSE. Be careful with this, as it returns different formats depending on whether the 'file' or the 'lines' argument is used. |
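No examples are documented for this internal helper. A hedged sketch of the two ways to supply the text, using a bundled example control stream (the file choice is an assumption):
## illustrative sketch only; getLines() is internal to NMdata
mod <- system.file("examples/nonmem/xgxr001.mod", package="NMdata")
res.file <- NMdata:::getLines(file=mod)
res.lines <- NMdata:::getLines(lines=readLines(mod))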
Convert inits elements to a parameter data.frame
Description
Convert inits elements to a parameter data.frame
Usage
initsToExt(elements)
Arguments
elements | The elements object produced by 'NMreadInits()'. |
Details
The name initsToExt can be misleading: the input is not the inits table but the elements object returned by NMreadInits. The elements object is more detailed, as it contains information about where information is found in control stream lines. The 'ext' object is a parameter 'data.frame', in the same format as returned by 'NMdata::NMreadExt()'.
Check if an object is 'NMdata'
Description
Check if an object is 'NMdata'
Usage
is.NMdata(x)
Arguments
x | Any object |
Value
logical if x is an 'NMdata' object
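For illustration (not part of the shipped examples), an object returned by NMscanData should test TRUE while an ordinary data.frame does not:
res <- NMscanData(system.file("examples/nonmem/xgxr001.lst", package="NMdata"))
is.NMdata(res)   ## expected TRUE
is.NMdata(iris)  ## expected FALSE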
Row numbers of elements in a triangular representation of a symmetric matrix
Description
Row numbers of elements in a triangular representation of a symmetric matrix
Usage
itriag(blocksize, istart = 1, diag = "lower")
Column numbers of elements in a triangular representation of a symmetric matrix
Description
Column numbers of elements in a triangular representation of a symmetric matrix
Usage
jtriag(blocksize, istart = 1, diag = "lower")
Apply function to subsets of 'data.frame', return named list
Description
Based on columns in data, run a function on subsets of data and return the results in a list, carrying names of the subsets. Say a column is 'model' and you want to create a plot, run a regression or anything else on the data for each model. In that case, use lapplydt(data, by="model", fun=function(x) lm(lAUC~lDose, data=x)). The l in lapplydt is because a list is returned (like lapply); the dt is because the input is a data.table.
Usage
lapplydt(data, by, fun, drop.null = FALSE)
Arguments
data | Data set to process. Must be a data.frame-like structure. |
by | Column to split data by. |
fun | function to pass to 'lapply()'. |
drop.null | If some subsets return NULL, drop the empty elements in the returned list? |
Details
The name of the current data subset can be reached with the '.nm' variable, as in the example below.
Value
a list
Examples
pk <- readRDS(file=system.file("examples/data/xgxr2.rds",package="NMdata"))
lapplydt(pk,by="DOSE",fun=function(x) {
    message("this is subset",.nm)
    nrow(x)
})
List rows with missing values across multiple columns
Description
Missing values can be NA, and for character variables they can be certain strings too. This function is experimental and its design may change in future releases.
Usage
listMissings(data, cols, by, na.strings = c("", "."), quiet = FALSE, as.fun)
Arguments
data | The data to look into. |
cols | The columns to look for missings in. |
by | If supplied, we are keeping track of the missings within the values of the by columns. In the summary, by is included too. |
na.strings | Strings that should be interpreted as missing. All spaces will be removed before we compare to na.strings. The default is c("",".") so, say, " . " is a missing by default. |
quiet | Keep quiet? Default is not to. |
as.fun | A function that will be run on the result before returning. If the first input data set is a data.table, the default is to return a data.table; if not, the default is to return a data.frame. Use whatever fits your workflow. The default can be configured with NMdataConf. |
Value
Invisibly, a data.frame including all findings
See Also
Other DataWrangling: compareCols(), dims()
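This page ships no examples. A minimal sketch based on the argument descriptions above (data and column names are made up):
dt <- data.frame(ID=c(1,1,2,2),
                 DV=c(1.2,NA,2.1,3.4),
                 WT=c("70",".","80",""))
## list rows where DV or WT is missing, tracked by ID
listMissings(dt,cols=c("DV","WT"),by="ID")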
Extract run time from output control stream
Description
Extract run time from output control stream
Usage
lstExtractTime(file, tz.lst = "as.is")
Arguments
file | path to output control stream |
tz.lst | The time zone of the time stamp from Nonmem. The default ("as.is") is to try to extract it or take it from the system on which this function is run. See details. |
Details
Time zones are system specific. See OlsonNames() for a list of what time zones are available on the system.
Value
A POSIXct date-time object
Examples
file <- system.file("examples/nonmem/xgxr003.lst",package="NMdata")NMdata:::lstExtractTime(file)file <- system.file("examples/nonmem/xgxr003.mod",package="NMdata")NMdata:::lstExtractTime(file)## Not run: all.lsts <- list.files( system.file("examples/nonmem",package="NMdata"), pattern="\\.lst",full.names=TRUE)lapply(all.lsts,NMdata:::lstExtractTime)## End(Not run)upper or lower triangle or all values of a matrix as long-format
Description
upper or lower triangle or all values of a matrix as long-format
Usage
mat2dt(x, triangle = "lower", as.fun)
Arguments
x | A matrix |
triangle | Either '"lower"' (default) or '"upper"', or '"all"' for which triangle to return. '"lower"' and '"upper"' are equivalent for covariance or correlation matrices but the returned indexes will differ. '"all"' will return the full matrix which mostly makes sense if the matrix is not a covariance or correlation matrix. |
as.fun | See '?NMdataConf' |
Details
The matrix is assumed ordered, and the index numbers for rows and columns will be returned in 'i' and 'j' columns. Row names and column names will be returned in columns 'parameter.i' and 'parameter.j'.
Value
A 'data.frame'-like object with indexes 'i' and 'j' for position and the matrix element value in the 'value' column.
See Also
dt2mat
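No examples are shipped on this page. A minimal sketch based on the description above (the matrix and its dimnames are made up):
m <- matrix(c(1,0.5,0.5,2),nrow=2,
            dimnames=list(c("a","b"),c("a","b")))
## long format of the lower triangle with i/j indexes
mat2dt(m,triangle="lower")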
Merge, order, and check resulting rows and columns.
Description
Stop checking that the number of rows is unchanged after a merge - 'mergeCheck' checks what you really want - i.e. x is extended with columns from y while all rows in x are retained, and no new rows are created (plus some more checks). 'mergeCheck' is not a merge implementation - it is a useful merge wrapper. The advantage over using a much more flexible merge or join function lies in the fully automated checking that the results are consistent with the simple merge described above.
Usage
mergeCheck(
  x,
  y,
  by,
  by.x,
  by.y,
  common.cols = base::warning,
  ncols.expect,
  track.msg = FALSE,
  quiet,
  df1,
  df2,
  subset.x,
  fun.na.by = base::stop,
  as.fun,
  fun.commoncols,
  ...
)
Arguments
x | A data.frame with the number of rows that should be obtained from the merge. The resulting data.frame will be ordered like x. |
y | A data.frame that will be merged onto x. |
by | The column(s) to merge by. Character string (vector). by or by.x and by.y must be supplied. |
by.x | If the columns to merge by in x and y are named differently. by or by.x and by.y must be supplied. |
by.y | If the columns to merge by in x and y are named differently. by or by.x and by.y must be supplied. |
common.cols | If common columns are found in x and y, and they are not used in 'by', this will by default create columns named like col.x and col.y in the result (see ?merge). Often, this is a mistake, and the default is to throw a warning if this happens. If using 'mergeCheck' in programming, you may want to make sure this is not happening and use common.cols=stop. If you want nothing to happen, you can do common.cols=NULL. You can also use 'common.cols="drop.x"' to drop "non-by" columns in 'x' with identical column names in 'y'. Use "drop.y" to drop them in 'y' and avoid the conflicts. The last option is to use 'common.cols="merge.by"' which means 'by' will automatically be extended to include all common column names. |
ncols.expect | Optionally check the number of columns being added to 'x'. If ncols.expect=1, the resulting data must have exactly one column more than 'x' - if not, an error will be returned. |
track.msg | If using 'mergeCheck' inside other functions, it can be useful to use track.msg=TRUE. This will add information to messages/warnings/errors that they came from 'mergeCheck()'. |
quiet | If FALSE, the names of the added columns are reported. Default value controlled by NMdataConf. |
df1 | Deprecated. Use x. |
df2 | Deprecated. Use y. |
subset.x | Not implemented. |
fun.na.by | If NA's are found in (matched) by columns in both x and y, what should we do? This could be OK, but in many cases, it's because something unexpected is happening. Use fun.na.by=NULL if you don't want to be notified and want to go ahead regardless. |
as.fun | The default is to return a data.table if x is a data.table and return a data.frame in all other cases. Pass a function in as.fun to convert to something else. |
fun.commoncols | Deprecated. Please use 'common.cols'. |
... | additional arguments passed to data.table::merge. If 'all' is among them, an error will be returned. |
Details
Besides merging and checking rows, 'mergeCheck' makes sure the order in x is retained in the resulting data (both row and column order). Also, a warning is given if column names are overlapping, making merge create new column names like col.x and col.y. Merges and other operations are done using data.table. If x is a data.frame (and not a data.table), it will internally be converted to a data.table, and the resulting data.table will be converted back to a data.frame before returning.
'mergeCheck' is for the kind of merges where we think of x as the data to be enriched with columns from y - rows unchanged. This is even more restrictive than a left join, where rows can be matched multiple times. A common example of the use of 'mergeCheck' is adding covariates to a PK/PD data set. We do not want that to remove or duplicate doses, observations, or simulation records. In those cases, 'mergeCheck' does all needed checks, and you can run full speed without checking dimensions (which is anyway not exactly the right thing to do in the general case) or worrying that something might go wrong.
Checks performed:
x has >0 rows
by columns are present in x and y
Merge is not performed on NA values. If by=ID and both x$ID and y$ID contain NA's, an error is thrown (see argument fun.na.by).
Merge is done by all common column names in x and y. A warning is thrown if there are column names that are not being used to merge by. This will result in two columns named like BW.x and BW.y and is often unintended.
Before merging, a row counter is added to x. After the merge, the result is assured to have exactly one occurrence of each of the values of the row counter in x.
Moreover, row and column order from x is retained in the result.
Value
a data.frame resulting from merging x and y. Class as defined by as.fun.
See Also
Other DataCreate: NMorderColumns(), NMstamp(), NMwriteData(), addTAPD(), findCovs(), findVars(), flagsAssign(), flagsCount(), tmpcol()
Examples
df1 <- data.frame(x = 1:10,
                  y=letters[1:10],
                  stringsAsFactors=FALSE)
df2 <- data.frame(y=letters[1:11],
                  x2 = 1:11,
                  stringsAsFactors=FALSE)
mc1 <- mergeCheck(x=df1,y=df2,by="y")
## Notice as opposed to most merge/join algorithms, `mergeCheck` by
## default retains both row and column order from x
library(data.table)
merge(as.data.table(df1),as.data.table(df2))
## Here we get a duplicate of a df1 row in the result. If we only
## check dimensions, we make a mistake. `mergeCheck` captures the
## error - and tell us where to find the problem (ID 31 and 180):
## Not run:
pk <- readRDS(file=system.file("examples/data/xgxr2.rds",package="NMdata"))
dt.cov <- pk[,.(ID=unique(ID))]
dt.cov[,COV:=sample(1:5,size=.N,replace=TRUE)]
dt.cov <- dt.cov[c(1,1:(.N-1))]
res.merge <- merge(pk,dt.cov,by="ID")
dims(pk,dt.cov,res.merge)
mergeCheck(pk,dt.cov,by="ID")
## End(Not run)
Pretty wrapping of lines in NMdata vignettes
Description
Pretty wrapping of lines in NMdata vignettes
Usage
messageWrap(
  ...,
  fun.msg = message,
  prefix = "\n",
  initial = "",
  width,
  track.msg = FALSE
)
Arguments
... | parameters to pass to strwrap |
fun.msg | The function to pass the text through. Typically, message, warning, or stop. If NULL, nothing will happen, and NULL is invisibly returned. |
prefix | Passed to strwrap. Default is "\n". |
initial | Passed to strwrap. Default is an empty string. |
width | Passed to strwrap. Default is 80. |
track.msg | If TRUE, the name of the function throwing the message/warning/error is mentioned. This is not the default but is useful when using the function inside other functions. |
Value
Nothing.
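As an illustration only (this helper is internal; the call below is an assumption based on the arguments listed above):
## illustrative sketch; messageWrap() is internal to NMdata
NMdata:::messageWrap("This rather long sentence will be wrapped before being passed on.",
                     fun.msg=message)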
print a data.table
Description
print a data.table
Usage
message_dt(x, ...)
Arguments
x | a data.table or something to be converted to a data.table. |
... | passed to print.data.table. |
Details
Default arguments to print.data.table (in addition to 'x=dt' which cannot be overwritten) are 'class=FALSE', 'print.keys=FALSE', 'row.names=FALSE'.
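For illustration only (this internal helper ships no examples; the data below is made up):
## illustrative sketch; message_dt() is internal to NMdata
NMdata:::message_dt(data.frame(model=c("run1","run2"),OFV=c(1234.5,1201.3)))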
print method for NMdata summaries
Description
print method for NMdata summaries
Usage
## S3 method for class 'summary_NMdata'
print(x, ...)
Arguments
x | The summary object to be printed. See ?summary.NMdata |
... | Arguments passed to other print methods. |
Value
NULL (invisibly)
Read as class NMctl
Description
Read as class NMctl
Usage
readCtl(x, ...)
Arguments
x | object to read. |
... | Not used. |
Value
An object of class 'NMctl'.
reduce tables from NMscanTables to fewer objects
Description
reduce tables from NMscanTables to fewer objects
Usage
reduceTables(tables, col.nmout)
Arguments
tables | Object from NMscanTables |
col.nmout | The name of the column holding a logical of whether the row was found in output tables. |
Details
This function is under development. It is not polished for other than internal use by NMscanData. It will likely return a different format in the future.
Value
A list of data.tables: One row-level, one id-level dataset and their column specifications.
Rename columns matching properties of data contents
Description
For instance, lowercase all columns that Nonmem cannot interpret (as numeric).
Usage
renameByContents(data, fun.test, fun.rename, invert.test = FALSE, as.fun)
Arguments
data | data.frame in which to rename columns |
fun.test | Function that returns TRUE for columns to be renamed. |
fun.rename | Function that takes the existing column name and returns the new one. |
invert.test | Rename those where FALSE is returned from fun.test. |
as.fun | The default is to return data as a data.frame. Pass a function (say tibble::as_tibble) in as.fun to convert to something else. If data.tables are wanted, use as.fun="data.table". The default can be configured using NMdataConf. |
Value
data with (some) new column names. Class as defined by as.fun.
Examples
pk <- readRDS(file=system.file("examples/data/xgxr2.rds",package="NMdata"))
pk[,trtact:=NULL]
pk <- renameByContents(data=pk,
                       fun.test = NMisNumeric,
                       fun.rename = tolower,
                       invert.test = TRUE)
## Or append a "C" to the same column names
pk <- readRDS(file=system.file("examples/data/xgxr2.rds",package="NMdata"))
pk[,trtact:=NULL]
pk <- renameByContents(data=pk,
                       fun.test = NMisNumeric,
                       fun.rename = function(x)paste0(x,"C"),
                       invert.test = TRUE)
Check row identifier in a model for necessary properties.
Description
This function is only meant for internal use by NMscanData.
Usage
searchColRow(
  file,
  file.mod = file.mod,
  dir.data,
  file.data,
  translate.input,
  formats.read,
  args.fread,
  col.id,
  tab.row
)
Arguments
file | a .lst (output) or a .mod (input) control stream file. The filename does not need to end in .lst. It is recommended to use the output control stream because it reflects the model as it was run rather than how it is planned for the next run. However, see file.mod and dir.data. |
file.mod | The input control stream file path. Default is to look for "file" with extension changed to .mod (PSN style). You can also supply the path to the file, or you can provide a function that translates the output file path to the input file path. If dir.data is missing, the input control stream is needed. This is because the .lst does not contain the path to the data file. The .mod file is only used for finding the data file. How to interpret the data file is read from the .lst file. The default can be configured using NMdataConf. See dir.data too. |
dir.data | The data directory can only be read from the control stream (.mod) and not from the output file (.lst). So if you only have the output file, use dir.data to tell in which directory to find the data file. If dir.data is provided, the .mod file is not used at all. |
file.data | Specification of the data file path. When this is used, the control streams are not used at all. |
translate.input | If TRUE (default), data columns are named as interpreted by Nonmem (in $INPUT). If the data file contains more columns than mentioned in $INPUT, these will be named as in the data file (if the data file contains named variables). |
args.fread | List of arguments passed to fread. Notice that except for "input" and "file", you need to supply all arguments to fread if you use this argument. Default values can be configured using NMdataConf. |
col.id | The name of the subject ID column. Optional and only used to calculate the number of subjects in data. Default is modified by NMdataConf. |
tab.row | row-level data |
Value
A character message about the findings if any
splitFields splits the fields format string into the splitters and the variable names. NB, this is interpreting the user-provided fields; it is not looking at control stream text.
Description
splitFields splits the fields format string into the splitters and the variable names. NB, this is interpreting the user-provided fields; it is not looking at control stream text.
Usage
splitFields(format, spaces.split = FALSE)
summary method for NMdata objects
Description
summary method for NMdata objects
Usage
## S3 method for class 'NMdata'
summary(object, ...)
Arguments
object | An NMdata object (from NMscanData). |
... | Only passed to the summary generic if object is missing NMdata meta data (this should not happen anyway). |
Details
The subjects are counted conditioned on the nmout column. If only id-level output tables are present, there are no nmout=TRUE rows. This means that in this case it will report that no IDs are found in output. The correct statement is that records are found for zero subjects in output tables.
Value
A list with summary information on the NMdata object.
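No examples are shipped on this page. A minimal sketch of typical use on an object returned by NMscanData, using a bundled example model:
res <- NMscanData(system.file("examples/nonmem/xgxr001.lst", package="NMdata"))
summary(res)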
generate a name for a new data column that is not already in use.
Description
generate a name for a new data column that is not already in use.
Usage
tmpcol(
  data,
  names = NULL,
  base = "tmpcol",
  i1 = 1,
  sep = "",
  max.it = 100,
  prefer.plain = TRUE
)
Arguments
data | The dataset to find a new element name for |
names | Character vector of names that must not be matched. Only one of data and names can be supplied. |
base | The base name of the new element. A number will be appended to this string to ensure that the new element name is not already in use. |
i1 | Where to start the search for the smallest available index number to add to 'base' if necessary. |
sep | Delimiter to use when appending integers. Default is none. |
max.it | Maximum number of iterations on element name. |
prefer.plain | If base isn't in use already, use it without a digit appended? |
Value
A character string
See Also
make.names
Other DataCreate: NMorderColumns(), NMstamp(), NMwriteData(), addTAPD(), findCovs(), findVars(), flagsAssign(), flagsCount(), mergeCheck()
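This page ships no examples. A small sketch; the return values indicated in the comments are assumptions based on the argument descriptions above:
dt <- data.frame(ID=1:3, tmpcol="x")
tmpcol(dt)                   ## likely "tmpcol1" since "tmpcol" is taken
tmpcol(names=c("ID","TIME")) ## likely "tmpcol" since the base name is free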
Calculate number of elements for matrix specification
Description
Calculate the number of elements in the diagonal and lower triangle of a square matrix, based on the length of the diagonal.
Usage
triagSize(diagSize)
Arguments
diagSize | The length of the diagonal. Same as number of rows or columns. |
Value
An integer
Examples
triagSize(1:5)
Catch errors, warnings, messages
Description
Catch errors, warnings, messages
Usage
tryCatchAll(expr, message = FALSE, warning = TRUE, error = TRUE)
Arguments
expr | The expression to catch errors, warnings, messages from |
message | Catch messages? Default is 'FALSE'. |
warning | Catch warnings? Default is 'TRUE'. |
error | Catch errors? Default is 'TRUE'. |
Examples
testfun <- function(x) {
    message("Starting function")
    if (x == 0) warning("x is zero")
    if (x < 0) stop("x is negative")
    message("Ending function")
    x
}
## use the class of the result to test whether anything was
## caught.
res1 <- NMdata:::tryCatchAll(testfun(1))
res1
inherits(res1,"tryCatchAll")
res1b <- NMdata:::tryCatchAll(testfun(1),message=FALSE)
res1b
inherits(res1b,"tryCatchAll")
res2 <- NMdata:::tryCatchAll(testfun(0))
res2
inherits(res2,"tryCatchAll")
res3 <- NMdata:::tryCatchAll(testfun(-1))
res3
inherits(res3,"tryCatchAll")
Remove NMdata class and discard NMdata meta data
Description
Remove NMdata class and discard NMdata meta data
Usage
unNMdata(x)
Arguments
x | An 'NMdata' object. |
Value
x stripped of the 'NMdata' class
Extract unique non-missing value from vector
Description
Extract unique non-missing value from vector
Usage
uniquePresent(x, req.n1 = TRUE, na.pattern)
Arguments
x | A vector, either numeric or character. |
req.n1 | Require one unique value? If 'TRUE' (default), an error is thrown if non-unique values are found. If 'FALSE', all the unique values are returned. |
na.pattern | In addition to NA elements, what text strings should be considered missing? Default is empty strings and strings only containing white spaces ('na.pattern="^ *$"'). |
Details
This function is particularly useful when combining data sets of which only some contain certain variables. uniquePresent with 'req.n1=TRUE' makes sure the result is a single unique value (e.g., within subjects). A typical use is carrying subject-level covariates from one data set to another in a longitudinal analysis.
Value
a vector of same class as 'x'
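No examples are shipped on this page. A minimal sketch following the argument descriptions; the results indicated in comments are assumptions:
uniquePresent(c(NA,"A","A",""))          ## expected "A"
uniquePresent(c(1,NA,2), req.n1=FALSE)   ## expected 1 2
## typical use, assuming a data.table 'dt' with subject-level WT:
## dt[, WT := uniquePresent(WT), by=ID]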
Do the actual writing of meta data
Description
Do the actual writing of meta data
Usage
writeNMinfo(data, meta, append = FALSE, byRef = TRUE)
Arguments
data | A data set |
meta | The meta data to attach |
append | If FALSE, the existing meta data will be removed. If TRUE, meta data will be appended to existing meta data. However, this will not work recursively. |
byRef | Should always be TRUE. |
Value
The data with meta data attached