John Dynes · EdD

Insights at the edge of AI and learning

Frameworks, tools, and thinking for professionals navigating the intersection of artificial intelligence, education, and organisational practice.

Real human-AI collaboration isn't a feature. It's a discipline. It demands that humans develop the mindset to engage with AI meaningfully, and that AI adapts how it responds to the individual human. We are nowhere near either yet.

Explore the work · Get in touch

30+ Years in practice
6 Qualifications held
3 Sectors served

About

Practitioner, educator, strategist

Drawing on experience from the manufacturing industry, the education sector, and the Army Reserve, I bring a unique perspective to the training environment. I am currently Head of Insights for a training organisation operating in the Defence sector, where I focus on educational insight, pedagogical development, and the application of Generative AI.

My work sits at the intersection of how people learn, how organisations develop capability, and how AI, used well, can extend the reach and quality of both by translating emerging capability into grounded, responsible practice.

My background spans frontline adult education, senior leadership, qualification and assessment design, coaching, and mediation. Together, these give me a wide frame of reference for the problems that sit underneath most L&D and AI challenges.

🎓 Doctorate in Education (EdD) · Advanced research in learning and professional practice
🤝 Level 7 Executive Coach · Senior leadership and professional coaching
⚖️ Qualified Mediator · Conflict resolution and facilitated dialogue
🧭 Total Strengths Facilitator · SDI-based team and leadership development
💡 Strategic and Innovative Thinker · Translating emerging ideas into grounded, practical application

Frameworks & Tools

Practical frameworks developed through applied research and professional practice — designed for real use in high-stakes contexts.

Evaluation Framework
Output Evaluator — Rubric

A seven-dimension rubric for evaluating AI-generated outputs in professional contexts. Operates in Guide Mode and Checker Mode, with a calibrated scoring scale and structured evaluation format.

Evaluation Tool
Output Evaluator — Self Assessment

Score an AI-generated output yourself using the seven-dimension rubric. Works through each dimension with guided scoring, captures your notes, and produces a structured PDF evaluation record.

Engagement Framework
Three-Prompt Framework

A structured approach to AI engagement training: Collaborator, Interrogator, Evaluator. Applies Bloom's, Perry's, and SOLO taxonomies to the quality of human–AI interaction.

Coming soon
Engagement Framework
Personal Engagement — Meta

A four-dimension framework for evaluating the quality of a person's AI engagement practice — how they plan, work, reflect, and think critically. The practitioner-facing companion to the Output Evaluator.

Coming soon
Diagnostic Tool
AI Tool Selector

A structured diagnostic for identifying the right AI tools for specific professional tasks — cutting through the noise to match capability to need.

Coming soon
Evaluative Framework
AI Strategic Use Evaluator

A framework for evaluating the strategic appropriateness of AI use in professional contexts — assessing whether, when, and how AI should be applied to a given challenge.

Coming soon

Writing & Ideas

Long-form thinking on AI, learning, and professional practice — a permanent home for ideas that go beyond the LinkedIn feed.

Thought Leadership
Articles & Position Pieces

In-depth articles, blogs, and position pieces on AI, learning, and professional practice — grounded in evidence, shaped by experience.

Coming soon
Current Work
Active Projects

A live view of current Claude projects and active workstreams — tools in development, frameworks being tested, and ideas being explored.

Coming soon

Contact

Get in touch

For professional enquiries, collaboration opportunities, or to discuss any of the frameworks, tools, or ideas on this site.