Prior to starting, make sure that:

- Fatty acid names are identical in all files (they contain the exact same numbers/characters and punctuation).
- There are no fatty acids in the prey file that do not appear in the predator file, and vice versa.
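The two checks above can be sketched with base R's set operations; the FA name vectors here are toy stand-ins for the column names of your real files.

```r
# Toy FA name vectors standing in for the column names of the real files
pred.fas <- c("c16.0", "c18.1w9", "c22.6w3")
prey.fas <- c("c22.6w3", "c16.0", "c18.1w9")

# Names mismatched in either direction show up here as non-empty vectors
setdiff(pred.fas, prey.fas)  # FAs in predator file but not prey file
setdiff(prey.fas, pred.fas)  # FAs in prey file but not predator file

stopifnot(setequal(pred.fas, prey.fas))
```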
This is the list of FAs to be used in the modelling.
The simplest alternative is to load a .csv file that contains a single column, with a header row, listing the names of the fatty acids (see example file).
A more complicated alternative is to load a .csv file with the full set of FAs and then add code to subset the FAs you wish to use from that set. This alternative is useful if you are planning to test multiple FA sets.
Regardless of how you load the FA set, it must be converted to a vector.
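A minimal sketch of the load-and-convert step, assuming a single-column file as described above; the file is simulated in-memory here so the snippet is self-contained (in practice you would pass your file name to read.csv).

```r
# Simulate a one-column .csv (header "FA") in place of reading a real file
fa.set <- read.csv(textConnection("FA
c16.0
c18.1w9
c20.5w3
c22.6w3"))

# The modelling function expects a vector of FA names, not a one-column
# data frame, so unlist and drop the names
fa.set <- as.vector(unlist(fa.set))
```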
The FA signatures in the originating .csv file should be proportions adding to 1.
Each predator signature is a row, with the FAs in columns (see example file). The FA signatures are subsetted for the chosen FA set (created above) and renormalized DURING the modelling, so there is no need to subset and/or renormalize prior to loading the .csv file or running p.MUFASA, BUT make sure that the same FAs appear in the predator and prey files.
Your predator FA .csv file can contain as many tombstone data columns as you like, but you must extract the predator FA signatures as a separate input in order to run p.MUFASA. For example: in the code below, the predator .csv file (“predatorFAs.csv”) has 4 tombstone columns (SampleCode, AnimalCode, SampleGroup, Biopsy). Prior to running MUFASA, the tombstone data (columns 1-4) and FA data (columns 5 onward) are each extracted from the original data frame. The FA data becomes the predator.matrix (which is passed to p.MUFASA), and the tombstone data is retained so that it can be recombined with the model output later on.
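The split described above might look like this; the tiny data frame stands in for reading “predatorFAs.csv”, and only the 4-tombstone-column layout is taken from the text (the FA values are invented).

```r
# Stand-in for: predatorFAs <- read.csv("predatorFAs.csv")
predatorFAs <- data.frame(
  SampleCode  = c("S1", "S2"),
  AnimalCode  = c("A1", "A2"),
  SampleGroup = c("G1", "G1"),
  Biopsy      = c("B1", "B2"),
  c16.0       = c(0.40, 0.35),
  c18.1w9     = c(0.60, 0.65)
)

tombstone.info  <- predatorFAs[, 1:4]                  # retained for later
predator.matrix <- predatorFAs[, 5:ncol(predatorFAs)]  # input to p.MUFASA
```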
The FA signatures in the .csv file should be proportions that sum to 1.
The prey file should contain all of the individual FA signatures of the prey and their lipid contents (where appropriate).
If you want to include only a subset of prey species, you must extract it prior to input (see code below).
Mean lipid content by species group is calculated from the full prey file using the species group as a summary variable (see code below).
Note: If no lipid content correction is going to be applied, then a vector of 1s of length equal to the number of species groups is used as the vector instead, i.e. FC <- rep(1, nrow(prey.matrix)).
If you’ve decided on a subset of species, you must extract them from the mean lipid content vector as well.
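The mean-lipid step and the species subset might be sketched as follows; the column names Species and lipid, and the species labels, are invented for illustration (substitute your own grouping and lipid columns).

```r
library(dplyr)

# Stand-in for the full prey file, with one lipid value per prey item
preyFAs <- data.frame(
  Species = c("cod", "cod", "herring", "herring", "sculpin"),
  lipid   = c(1.2, 1.4, 8.0, 9.0, 2.0)
)

# Mean lipid content by species group
FC <- preyFAs %>%
  group_by(Species) %>%
  summarise(lipid = mean(lipid))

# If modelling only a subset of species, subset the lipid means to match
spp    <- c("cod", "herring")
FC     <- FC[FC$Species %in% spp, ]
FC.vec <- FC$lipid
```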
Calibration .csv file should contain 2 columns (with headers). The first contains the FA names, the second the value of the CC for each FA (see example file “CC.csv”).
Important: The FAs in the CC.csv file must be exactly the same as the FAs in the original predator.csv file and they must be in the EXACT SAME ORDER.
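A cheap guard for the ordering requirement above; the two small data frames are stand-ins for “CC.csv” and the predator FA data, with invented values.

```r
# Stand-ins for: cc <- read.csv("CC.csv") and the predator FA data
cc <- data.frame(FA = c("c16.0", "c18.1w9"),
                 CC = c(1.01, 0.98))
predator.matrix <- data.frame(c16.0 = 0.40, c18.1w9 = 0.60)

# Fails loudly if the FA names differ or are in a different order
stopifnot(identical(cc$FA, colnames(predator.matrix)))

cal.vec <- cc$CC  # the calibration coefficients, now safe to use
```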
The MUFASA output is a list with 3 components:
This is a matrix of the diet estimate for each predator (by rows, in the same order as the input file) by the species groups (by column, in the same order as the prey.red file). The estimates are expressed as a proportion (they will sum to 1).
This is a vector of the negative log likelihood values at each iteration of the optimizer.
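Once the run finishes, the diet-estimate matrix can be recombined with the tombstone columns retained earlier. The component name Diet_Estimates and the fake output below are assumptions for illustration; check names() on your actual p.MUFASA result.

```r
# Fake model output standing in for the real p.MUFASA return value;
# the component name "Diet_Estimates" is assumed, not guaranteed
mufasa.out <- list(
  Diet_Estimates = matrix(c(0.7, 0.3,
                            0.2, 0.8),
                          nrow = 2, byrow = TRUE,
                          dimnames = list(NULL, c("cod", "herring")))
)
tombstone.info <- data.frame(SampleCode = c("S1", "S2"),
                             AnimalCode = c("A1", "A2"))

# One row per predator: tombstone columns first, then diet proportions
estimates <- cbind(tombstone.info, as.data.frame(mufasa.out$Diet_Estimates))
```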