Research software is increasingly vital for inquiry and learning within Higher Degree Research (HDR), yet solo researchers often lack specific guidance when developing such tools alongside emerging artificial intelligence technologies. Ka Ching Chan of the University of Southern Queensland and colleagues address this gap by proposing the SHAPR framework, a Solo, Human-centred, AI-assisted PRactice framework designed to operationalise Action Design Research principles for individual researchers. This work is significant because it moves beyond high-level methodological guidance to offer actionable steps for daily practice, explicitly detailing the roles, artefacts, and reflective practices needed to maintain accountability and learning during AI-assisted software development. The authors evaluate SHAPR through reflective analysis, assessing its coherence, its alignment with established research methods, and its suitability for solo practice, ultimately contributing to discussions on supporting both knowledge creation and training for HDR researchers.
Generative artificial intelligence is reshaping development practice, offering powerful forms of assistance while introducing new challenges for accountability, reflection, and methodological rigour. Building software enables researchers to externalise abstract ideas, test assumptions through implementation, and iteratively refine both the artefact and their conceptual understanding, aligning with learning-by-building and constructionist views of knowledge development (Kolb, 1984; Papert, 1980).
Research software serves a dual role as both a means of inquiry and an outcome of the research process (Kennedy-Clark, 2013). A defining characteristic of research software development in HDR is that it is most often undertaken by a single researcher, who assumes multiple roles including domain expert, system designer, developer, evaluator, and reflective analyst.
This solo mode affords flexibility and integration between domain knowledge and technical design, but also imposes significant cognitive and methodological demands. Existing work often emphasises team-based practices and sustainability, while pedagogical studies frequently address project-based learning without explicitly considering software artefacts as instruments of research methodology.
Consequently, there is limited guidance on how solo researchers can structure software development to systematically support learning, reflection, and knowledge generation, a gap that becomes increasingly consequential as research artefacts grow in scope and complexity. Contemporary research systems are often software-based and full-stack, encompassing user interfaces, application logic, data persistence, and deployment environments, reinforcing the need for structured approaches integrating technical work with reflective research practice (Schön, 1983).
Research artefacts may consist of algorithmic implementations, analytical pipelines, simulation models, or formal analytical constructs such as statistical or structural equation models, with inquiry advanced through iterative construction, evaluation, and refinement rather than complete prior specification. Recognising research artefacts as knowledge-generating instruments motivates the need for frameworks supporting solo researchers in aligning development practice with rigorous research methodologies.
Recent advances in generative AI have begun to reshape research software conception and development, particularly in solo contexts. Large language models (LLMs) and AI-assisted development tools are increasingly capable of generating source code, drafting requirements, suggesting architectural patterns, and summarising technical artefacts (Chow and Ng, 2025).
For solo researchers, these tools offer the potential to accelerate implementation and reduce technical barriers (Amershi et al., 2019; Shneiderman, 2020), exemplified by the growing practice of ‘vibe coding’, which relies on high-level prompts, iterative feedback, and rapid experimentation rather than detailed upfront specifications (Fawzy et al., 2025; Ge et al., 2025). This allows solo researchers to translate conceptual ideas into working prototypes within short timeframes and, when combined with integrated development environments and version control, can support rapid exploratory cycles that align with the iterative nature of research inquiry.
However, increased speed and abstraction also introduce challenges for maintaining methodological rigour and transparency. AI-assisted development alters the cognitive distribution of work between human and machine, raising questions about authorship, accountability, and understanding. In research contexts, the researcher remains responsible for the correctness of the software and the validity of the knowledge claims derived from it.
Uncritical reliance on AI-generated outputs risks obscuring design rationales, masking errors, and weakening traceability between research questions, design decisions, and outcomes. From a learning perspective, AI tools may function as cognitive scaffolds, but may also reduce opportunities for deep learning if outputs are accepted without reflection, aligning with broader concerns in human-AI collaboration regarding the balance between automation and meaningful human engagement.
Despite rapid adoption, existing research methodologies provide limited guidance on systematically incorporating AI-assisted tools into research workflows, leaving a lack of structured approaches that help solo researchers integrate AI assistance while preserving human accountability, reflective practice, and methodological coherence. Action Design Research (ADR) has emerged as a well-established methodology for studying and constructing socio-technical artefacts, particularly in information systems and educational technology research.
By integrating artefact construction with intervention, evaluation, and reflection, ADR provides a structured approach for generating design knowledge while addressing real-world problems. Its emphasis on iterative Building, Intervention, Evaluation (BIE) cycles makes ADR well suited to research settings where learning unfolds through the development and refinement of complex artefacts, including research software systems, legitimising software artefacts as vehicles for inquiry and foregrounding reflection and abstraction as mechanisms for generating transferable design knowledge (Mullarkey and Hevner, 2019).
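The iterative BIE cycle described above can be sketched in code. This is a minimal illustrative sketch only; the `Artefact` class, the function names, and the fixed cycle count are hypothetical assumptions introduced here, not constructs prescribed by ADR or by the paper.

```python
# Illustrative sketch of iterative Building-Intervention-Evaluation (BIE)
# cycles for a solo researcher. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Artefact:
    version: int = 0
    reflections: list = field(default_factory=list)

def build(artefact: Artefact) -> Artefact:
    # Construct or revise the research software artefact.
    artefact.version += 1
    return artefact

def intervene_and_evaluate(artefact: Artefact) -> str:
    # Use the artefact in the research setting and record what was learned.
    return f"v{artefact.version}: evaluated against the research questions"

def reflect(artefact: Artefact, finding: str) -> None:
    # Abstract the finding into transferable design knowledge.
    artefact.reflections.append(finding)

artefact = Artefact()
for _ in range(3):  # each iteration is one BIE cycle
    artefact = build(artefact)
    finding = intervene_and_evaluate(artefact)
    reflect(artefact, finding)

print(artefact.version)           # number of completed cycles
print(len(artefact.reflections))  # one reflection recorded per cycle
```

The point of the sketch is that reflection is a first-class step of every cycle rather than an afterthought once the artefact is "done".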
However, ADR offers limited guidance on how its principles should be operationalised in day-to-day development practice, particularly in solo research contexts. The original formulation of ADR assumed organisational settings with multiple stakeholders, role separation, and collective sense-making processes, assumptions that do not readily translate to solo research software development.
This limitation is amplified when generative AI tools are introduced: ADR specifies activities but does not articulate how human-AI collaboration should be structured within those activities, nor how accountability and understanding should be preserved when development tasks are partially delegated to AI systems. Without explicit guidance, solo researchers may adopt practices that obscure decision-making, weaken traceability, and blur the boundary between human judgement and machine-generated output, with important implications for education as AI-assisted development risks prioritising rapid production over reflective learning and methodological transparency.
While ADR establishes methodological intent, complementary guidance is needed to ensure this intent is realised in practice when development is undertaken by a single researcher working with AI assistance (Kennedy-Clark, 2013). The SHAPR framework addresses a critical gap in research methodologies by explicitly linking research software development, human-AI collaboration, and reflective learning.
This work conceptualises solo research software development as a distinct context, acknowledging the unique challenges of role multiplexing and AI assistance. By making these elements explicit, SHAPR aims to sustain human accountability and learning during AI-assisted development. Conceptual role separation is a key feature, allowing the researcher to delineate tasks and maintain oversight even when utilising generative AI systems.
SHAPR’s artefact-centred governance further clarifies the relationship between the software being developed and the methodological scaffolding supporting it, treating SHAPR itself as the primary design artefact to allow for formative evaluation focused on internal coherence and applicability to solo research practice. This approach prioritises methodological transparency and traceability, mitigating risks associated with obscured design rationales and weakened connections between research questions, decisions, and outcomes, emphasising the importance of preserving human judgement and understanding within AI-mediated development processes.
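One concrete way to preserve the traceability described above is a structured decision log kept alongside the code. The sketch below is an assumption-laden illustration, not a format prescribed by SHAPR: the record fields, the example entry, and its contents are all hypothetical.

```python
# Hypothetical decision log preserving traceability between research
# questions, design decisions, and AI involvement. Field names and the
# example entry are illustrative assumptions only.
import json
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    research_question: str  # which question the decision serves
    decision: str           # what was decided
    rationale: str          # why, in the researcher's own words
    ai_assisted: bool       # whether generative AI contributed
    human_verified: bool    # researcher reviewed and accepted the output

log = [
    DecisionRecord(
        research_question="RQ1 (hypothetical): does caching reduce pipeline latency?",
        decision="Adopt an LRU cache for intermediate results",
        rationale="Bounded memory use; access pattern is temporally local",
        ai_assisted=True,
        human_verified=True,
    ),
]

# Serialise the log so decisions remain inspectable alongside the artefact,
# e.g. committed to version control next to the source it governs.
print(json.dumps([asdict(r) for r in log], indent=2))
```

Requiring an explicit `human_verified` flag on every AI-assisted entry is one simple way to keep the boundary between human judgement and machine-generated output visible.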
Researchers increasingly rely on bespoke software to conduct their work, yet the rise of generative artificial intelligence presents a paradox for doctoral researchers: these tools promise to accelerate development, but simultaneously threaten the rigorous, reflective practice that underpins credible scholarship. SHAPR responds to this paradox, though its conceptual nature, evaluated through internal coherence rather than large-scale trials, remains a limitation. Future work must explore how the framework translates into practical tools and workflows for researchers. More broadly, the challenge remains of fostering a culture where methodological rigour is seen not as a bureaucratic burden but as an integral part of the creative process, especially when leveraging rapidly evolving AI technologies.
👉 More information
🗞 SHAPR: A Solo Human-Centred and AI-Assisted Practice Framework for Research Software Development
🧠 ArXiv: https://arxiv.org/abs/2602.12443
