
PK/PD Modeling / Pharmacometrics Lead

Weekday AI
Full-time
On-site / Remote

This role is for one of our clients.

Compensation: $150–$200 per hour

We are looking for an expert in quantitative pharmacology who can strengthen decision-making around dose selection, exposure–response interpretation, and model plausibility. This individual will complement translational and clinical pharmacology teams by enforcing quantitative rigor, identifying implausible model behavior, and defining evaluation standards that distinguish credible outputs from high-risk extrapolations.

Requirements

Who We’re Looking For

  • Extensive hands-on experience in PK, PD, exposure–response modeling, and ideally population PK or QSP.
  • Strong capability in model fitting, sensitivity analysis, and detecting non-plausible parameter regions.
  • Able to assess the validity of dose–exposure predictions and identify unsafe or unjustified extrapolations.
  • Skilled at creating model evaluation rubrics that separate acceptable outputs from non-credible ones.
  • Able to communicate how quantitative criteria should complement narrative or qualitative decision logic.

Nice-to-Have

  • Experience supporting translational or clinical pharmacology leads in dose justification.
  • Familiarity with integrating nonclinical PK/PD data (e.g., two-species GLP → human FIH projections).

Experience Level

  • 8–12 years in quantitative pharmacology, pharmacometrics, PK/PD modeling, or related roles within pharma, CROs, or modeling consultancies.
  • Strong track record in population PK/PD, exposure–response modeling, and parameter estimation using NONMEM, Monolix, or similar tools.
  • Demonstrated ability to interpret model outputs for decision-making, not just data fitting.
  • Able to build fit-for-purpose models and critique model structures, assumptions, and uncertainty drivers.

Key Expectations

  • Design and refine micro-evaluations for PK/PD model performance (curve fits, parameter plausibility checks, error classification).
  • Implement quantitative sanity checks and embed them into structured evaluation rubrics.
  • Define failure conditions such as unsafe extrapolations, inadequate curve coverage, or invalid assumptions.
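To make the expectations above concrete, here is a minimal sketch of what a parameter-plausibility check and an extrapolation flag might look like in Python. The one-compartment parameter names, the bounds, and the 2× dose threshold are illustrative assumptions for this sketch, not values specified by the role:

```python
from dataclasses import dataclass

@dataclass
class OneCmpParams:
    cl: float   # clearance (L/h)
    v: float    # volume of distribution (L)
    ka: float   # absorption rate constant (1/h)

# Hypothetical physiological plausibility bounds for an adult population.
BOUNDS = {"cl": (0.1, 100.0), "v": (1.0, 1000.0), "ka": (0.01, 10.0)}

def plausibility_failures(p: OneCmpParams) -> list[str]:
    """Return failure-condition labels for any out-of-range parameters."""
    failures = []
    for name, (lo, hi) in BOUNDS.items():
        value = getattr(p, name)
        if not (lo <= value <= hi):
            failures.append(f"implausible_{name}: {value} outside [{lo}, {hi}]")
    return failures

def flags_unsafe_extrapolation(dose: float, max_studied_dose: float,
                               factor: float = 2.0) -> bool:
    """Flag predictions requested beyond `factor` x the highest studied dose."""
    return dose > factor * max_studied_dose
```

In practice, checks like these would be one row in a larger structured rubric rather than standalone scripts.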

Inputs Provided

  • PK/PD datasets, toxicology summaries, and specific modeling prompts (e.g., “fit exposure–response curve and assess safety margin”).
  • Sample model outputs generated by automated systems.
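As a rough illustration of the kind of modeling prompt quoted above, the sketch below fits a standard Emax exposure–response model and computes a simple safety margin. The data, the NOAEL-derived exposure cap, and the choice of EC80 as the reference exposure are all invented for this example:

```python
import numpy as np
from scipy.optimize import curve_fit

def emax(conc, e0, emax_, ec50):
    """Standard Emax exposure–response model."""
    return e0 + emax_ * conc / (ec50 + conc)

conc = np.array([0.0, 1.0, 3.0, 10.0, 30.0, 100.0])   # exposure (ng/mL), hypothetical
resp = np.array([0.5, 2.1, 4.8, 8.9, 11.7, 13.2])     # response, hypothetical

popt, pcov = curve_fit(emax, conc, resp, p0=[0.5, 15.0, 10.0])
e0_hat, emax_hat, ec50_hat = popt

# Safety margin: ratio of a hypothetical NOAEL-derived exposure cap to the
# exposure giving 80% of maximal effect (for an Emax model, EC80 = 4 * EC50).
noael_exposure = 300.0
ec80 = 4.0 * ec50_hat
safety_margin = noael_exposure / ec80
```

A golden-fit deliverable would pair a fit like this with diagnostics (residual plots, parameter uncertainty from `pcov`) rather than point estimates alone.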

Expected Outputs

1. Quantitative Rubrics
Clear thresholds and evaluation criteria for parameter plausibility, curve quality, and overall model integrity.

2. Golden Fit Examples
Representative “ideal” PK/PD fits and visualizations to guide calibration and benchmark model performance.

3. Error Taxonomy
Structured classification of common modeling and fitting errors, with root-cause explanations.

4. Meta-Layer Commentary
Concise insights explaining how expert modelers recognize implausible, unsafe, or structurally flawed model outputs beyond surface-level error metrics.

Engagement Model

  • Contract / Part-time / Remote
  • Outcome-oriented deliverables with flexible collaboration.