REMS

Authors: Clare Tagg (Tagg Oram Partnership). Co-authors at time of this report were Somia Nasim and Peter Goff (The Qualifications & Curriculum Agency)

Setting up the Project

What is REMS? 

So, where did it all start? In January 2008 the Qualifications & Curriculum Agency (QCA) employed a team briefed to:

  • Develop an evidence database to store and manage the huge quantities of evidence generated on educational reform: the 14-19 Reform, Qualifications & Credit Framework, Adult Skills and Lifelong Learning & Skills programmes;
  • Use data analysis software such as NVivo to provide a system which enables exploratory and thematic interrogation and querying of the evidence.

In order to make informed, evidence-based decisions and policy, QCA ultimately wanted to:

  • Manage and use evidence more effectively;
  • Meet corporate knowledge transfer requirements.

This is a concept that the Research & Evaluation Team had been grappling with for some time. They trialled the idea on the Curriculum 2000 programme, using N6 to record, code and analyse the huge volume of consultation responses. The success of that project resulted in the birth of the REMS (Research Evidence Management System) project – similar, but with a more demanding and ambitious remit.

Now that the remit was established, we needed to devise a specification for the project to agree the parameters of the work, the project plan, capacity and resources. See Tagg Oram Partnership for more detail.

That is Challenging!

'That is challenging!' is one way of describing it. In the initial exploratory phase of the development, two things became apparent. Firstly, nobody really knew what the mechanisms of the system should be, just what it should do. Secondly, most colleagues outside the Research & Evaluation Team felt the project was impossible and unrealistic. So the first decision made was to treat the project like building a house, where the foundations need to be in place before the structure can go up. Small, planned steps were needed, and a comprehensive project plan was therefore put in place.

Planning the Project

Devising the project plan included thinking strategically about how the system and evidence would be used in the short and longer term. An overall project plan was put together which incorporated the following sub-strategies:

  • Data collection and sorting
  • Piloting REMS
  • Stakeholder and communications
  • Analysis and reporting
  • Maintenance and development

These strategies were needed in order to make REMS work.

Putting together the REMS Team

The first problem identified by QCA in making this happen was that past trials of developing something similar, on a smaller scale, had fizzled out because no one directed and managed the process. QCA therefore decided to recruit a Research Data Manager, Somia Nasim. To further support the design work, an experienced researcher in the education and skills field, Peter Goff, was also assigned to the project. At this point the research expertise was in place; the remaining gap was an NVivo expert, and Clare Tagg was brought in to fill it.

Once the pilot was completed, two full-time coders were recruited on a temporary contract. As the project proceeded, it became apparent that it would be useful to have extra help with analysis, and the team is currently working with a pool of contractors with different areas of expertise, who are able to undertake both coding and analysis.

Why NVivo7?

The REMS team explored the qualitative data analysis market. The Research & Evaluation Team had some experience of using QSR software (NVivo7 and N6 – see www.qsrinternational.com) and had successfully used it in a number of research surveys, consultations and policy development projects. Principally these were one-off, short-term projects to analyse and organise data for ease of reference and to assist with reporting. The team completed a strengths-and-weaknesses analysis, which recommended the use of NVivo7. This choice was further explored with Clare Tagg, who felt the match between the project remit and NVivo's features was good and workable. She undertook performance tests to ensure that NVivo7 would be able to handle the large volumes of data expected for REMS.

Once the project was underway, QSR released an updated version, NVivo8, with a number of enhanced features. These were attractive to the project, and after a successful trial of the system in NVivo8 the team converted the project in August 2008.

Convincing the Team

One immediate action from the specification was to establish a steering group to guide the project, manage risks and issues, assist and direct the design of REMS, review performance and continuously evaluate progress and functionality.

A key design and development element of REMS is the Communication Strategy. From the project launch the REMS team were aware that the key to making this project successful was to get buy-in from internal and external stakeholders. Therefore, a comprehensive strategy was put in place which included:

  • Developing a proposal of the project and sharing it with Research & Evaluation Team colleagues;
  • Regular progress updates at communications events;
  • Presentations to the Executive and senior management teams;
  • A two-phase REMS road show of presentations to internal colleagues:
  1. What is REMS?
  2. How does REMS work?
  • Selling REMS as a service for colleagues, where all the management and maintenance is looked after by the REMS team. In the past, QCA had negative experiences of using databases and new software.


The data

The key to reaching the ambitious and challenging aims of REMS is data; without collecting the data and inputting it into NVivo, the system is non-existent (as illustrated in the diagram below).

The process

Data Sources

It was very apparent to the team from the outset that they needed to decide which types of data should be incorporated into REMS, and in particular whether the system should incorporate primary or secondary data. Both were explored in detail and a risk and SWOT analysis was conducted. The recommendation was to incorporate only secondary evidence into the system, with a view to exploring the potential of adding primary data once the system was established. The key reasons for this were:

  • Manageability – a quick scan of the information held and in production, as well as an estimate of what would be produced up to 2013, raised concerns about how the vast amount of primary and secondary data would be managed;
  • Coding framework and classification – if both types of data went into the system, it would mean the coding framework and classifications applied would become more complex;
  • Data protection issues – REMS is a system that is intended to be utilised by internal and external colleagues, and as a result it would be very difficult to meet the requirements of the Data Protection Act and the Research Code of Practice with primary data;
  • Data being misinterpreted – response data can very easily be misconstrued, especially where conclusions are drawn from a few responses. This would be very difficult to manage and control.

We were still left with a large amount of evidence, as we wanted to incorporate into the system strategic objectives, policy documents, monitoring, research and evaluation reports, and commentary and media articles, from both internal and external stakeholders.

Data Collection and Input

To monitor and manage the data collected from stakeholders and input it into the system, a thorough data collection strategy was put in place. This included:

  • Collecting the backlog of evidence from all colleagues, partners and stakeholders.
  • Using established databases to draw out the relevant evidence reports.
  • Using Google news reader and AMI Software to monitor current and emerging themes and newly published work from RSS feeds (a sketch of this kind of feed monitoring follows the list).
  • Searching stakeholder websites to collect published evidence reports.
  • Reviewing education and skills journals for relevant research work.
  • Making regular email requests to colleagues for relevant evidence reports.
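
To give a flavour of the feed-monitoring step, the sketch below (in Python, using the feedparser library) scans an RSS feed for items that mention tracked themes. The feed URL and the keyword list are illustrative assumptions, not the actual feeds or tools used by REMS.

    # Illustrative sketch only: scan an RSS feed for newly published work
    # on tracked themes. The feed URL and keywords are assumptions.
    import feedparser

    KEYWORDS = {"14-19", "qualifications", "curriculum", "apprenticeships"}

    def relevant_entries(feed_url):
        """Yield (title, link) for feed items whose titles mention a tracked theme."""
        feed = feedparser.parse(feed_url)
        for entry in feed.entries:
            title = entry.get("title", "")
            if any(keyword.lower() in title.lower() for keyword in KEYWORDS):
                yield title, entry.get("link", "")

    # Hypothetical feed URL, for illustration only
    for title, link in relevant_entries("https://example.org/education-news.rss"):
        print(title, link)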

At this stage the team were faced with some key challenges:

  • Dealing with confidential sources – to build the most comprehensive system, some work, particularly in the development and exploratory stages of the project, may need to be kept confidential for a period of time. Some colleagues were concerned with how this would be managed and what measures were in place. As a result, such sources were given confidential tags and the word 'confidential' was also written in the source's title. This allows the team to remove these sources easily if necessary.
  • Aim of keeping the sources in their original format – one of the main reasons the team moved from N6 to NVivo7 and then NVivo8 was that the software imports Word documents into NVivo with virtually all of the formatting in place, in most cases needing only minor amendments. One challenge has been that highly formatted PDFs do not always convert well in NVivo8 and can cause performance degradation. The team is still working on the best strategy for handling long (50-150 page) PDFs.
  • Source reliability – as more and more sources were put into the system, it became apparent that some form of source rating was required to make it easy for colleagues to establish the degree to which a source had been subject to the controls of research methods. We applied a plus rating system, where a higher rating indicates a higher degree of research rigour.

A year into the project, REMS contains about 300 sources, ranging from short media reports of one or two pages to long research reports of 100 or more pages and policy documents of 150 pages. Many of these sources contain a variety of tables and graphs.

Working with the data

In the first six months of REMS most of the work on the NVivo project involved collecting, importing and coding the data. During this period a naming convention was designed for source documents, a classification system was developed using attributes and a coding framework was developed and refined. During this pilot phase the protocols for merging the project were developed; these were necessary because NVivo only allows one simultaneous user and it was clear from the outset that several people would need to work on coding at once.

Data sorting

The source naming convention in NVivo is extremely important as it determines how the software organises the evidence. The team knew this would matter and spent some time designing the convention. Both the date of the source and the reform strand it relates to were important. Storing sources by strand, with a sub-folder for each strand, was considered. However, as sources were applicable to multiple strands and chronological ordering of the evidence was vital, after detailed discussion it was decided that date was key, as this would allow us to build up the story over time.

The strand element was not lost, as acronyms for the strands were added into the naming convention. The naming convention employed in REMS is: year-month-strand-author-title-plus rating. This allows the team to search for sources by strand or author, but evidence is always displayed chronologically.
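
To illustrate how the convention keeps evidence in chronological order, the short Python sketch below builds a source name from its parts; alphabetical sorting of such names is then also chronological. The strand acronym, separators and example values are assumptions for illustration, not the exact REMS format.

    from datetime import date

    # Sketch of a REMS-style source name: year-month-strand-author-title-plus
    # rating. Zero-padding the month keeps alphabetical order chronological.
    def make_source_name(published, strand, author, title, plus_rating):
        return "-".join([
            str(published.year),
            f"{published.month:02d}",
            strand,                 # strand acronym, e.g. 'QCF' (assumed)
            author,
            title,
            "+" * plus_rating,      # e.g. '+++' for greater research rigour
        ])

    print(make_source_name(date(2008, 9, 1), "QCF", "DCSF",
                           "Delivering 14-19 Reform Next Steps", 3))
    # -> 2008-09-QCF-DCSF-Delivering 14-19 Reform Next Steps-+++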

Classification

The naming convention is one way to sort the evidence reports, but for the database to function in a more sophisticated manner, further information about each source was required. In NVivo this is achieved by designing attributes and applying attribute values to each source case. This was probably one of the easiest areas to develop.

Each attribute was designed to be simple and easy to apply, but with analysis in mind (e.g. to allow sources published in 2009 about implementation issues to be selected). The main problem we encountered was that, for each attribute, you are only able to select one value. For example, we needed to know which reform strand each source relates to, and we knew that a source may sometimes relate to more than one strand. So a separate attribute was added for each of the strands, with yes/no values (sketched below). A year into the project, each document is classified according to 23 attributes.
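
The Python sketch below illustrates the workaround: each strand becomes its own yes/no attribute, and a simple filter plays the role of selecting sources by attribute values. The attribute and strand names are invented for illustration.

    # Each source carries single-valued attributes; each strand gets its own
    # yes/no attribute because an attribute can hold only one value.
    source = {
        "Year published": "2009",
        "Source type": "Research report",     # attribute names are assumptions
        "Strand: 14-19 Reform": "Yes",
        "Strand: QCF": "Yes",
        "Strand: Adult Skills": "No",
    }

    def matching(sources, required):
        """Select sources whose attributes match all of the required values."""
        return [s for s in sources
                if all(s.get(key) == value for key, value in required.items())]

    # e.g. sources published in 2009 that relate to the QCF strand
    hits = matching([source], {"Year published": "2009", "Strand: QCF": "Yes"})
    print(len(hits))  # 1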

Coding

All databases apply tags or classifications, but the REMS system uses qualitative research techniques to apply coding to the content of each evidence source. Before we were able to do this we needed to develop a coding framework. This aspect of the system was probably the most difficult to design, as the normal research approaches were inapplicable. REMS does not have research objectives, so it was not possible to use these as a starting point. Nor was it possible to use a grounded approach, as the sources were very varied and contained much that was interesting but not particularly relevant. Instead we used the 14-19 Reform policy objectives as a guide, together with QCA's objectives and remit in relation to these. We knew the coding framework needed to be flexible enough to deal with short, medium and longer-term issues, comprehensive enough to deal with a range of queries and interrogation, and simple enough to make coding easy for multiple coders.

As you can imagine, it took several iterations before we finalised the framework. The framework consists of two levels of themes (or nodes, as they are known in NVivo) and each theme is fairly high level. For each theme a title and definition was created in NVivo, so we all knew what should be coded to each theme, and to maintain consistency in the coding (a sketch of this structure follows). Once a draft coding framework was formulated, tests were conducted to clarify how the coding would work. When we felt that the framework was ready, we piloted it using a range of different types and sizes of source. Although the framework is not static and we can add and remove nodes, as a team we decided that, for consistency, this would only be done on a quarterly basis.
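
As a minimal sketch, a two-level framework of this kind can be pictured as the Python structure below: top-level themes, each with a definition and high-level child nodes. The names and definitions are invented for illustration and are not the actual REMS framework.

    # Two-level coding framework: top-level themes, each with a definition
    # and high-level child nodes. All names and definitions are assumptions.
    coding_framework = {
        "Qualifications": {
            "definition": "Evidence about the design and uptake of qualifications.",
            "children": {
                "Diplomas": "Evidence about new 14-19 qualifications.",
                "Other": "Qualifications evidence not covered elsewhere.",
            },
        },
        "Delivery": {
            "definition": "Evidence about how the reform is being delivered.",
            "children": {
                "Workforce": "Staffing and teaching-capacity issues.",
                "Other": "Delivery evidence not covered elsewhere.",
            },
        },
    }

    # Shared titles and definitions are what keep multiple coders consistent.
    for theme, node in coding_framework.items():
        print(theme, "-", node["definition"])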

Coding policy

The diagram below illustrates the coding policy and framework:

Coding framework

The strategy adopted was to multi-code each piece of evidence to between 4 and 6 nodes, designed as follows (a sketch of these rules appears after the list):

  • A set of nodes was developed as an initial portal based on 'qualifications'.
  • A similar set of nodes was developed for each 'Curriculum Aim' and 'Stakeholder'.
  • A strategy was adopted of coding all evidence initially to these top 3 areas.
  • A series of nodes covered Delivery, Assessment, Pedagogy, Accountability and Reform process; at least one must be selected to code the extract.
  • A set of nodes identified important cross-cutting issues arising during the reform (eg Academic vocational divide).
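
To make the policy concrete, the Python sketch below checks an extract's coding against these rules: all three portal areas covered, at least one process node selected, and 4-6 nodes in total. The 'Area: node' naming scheme and the sample extract are assumptions for illustration.

    # Check an extract's nodes against the multi-coding policy above.
    # Top-level areas are taken from an assumed 'Area: node' naming scheme.
    PORTALS = {"Qualifications", "Curriculum Aim", "Stakeholder"}
    PROCESS = {"Delivery", "Assessment", "Pedagogy", "Accountability",
               "Reform process"}

    def coding_is_valid(nodes):
        areas = {node.split(":")[0] for node in nodes}
        return (4 <= len(nodes) <= 6          # multi-coded to 4-6 nodes
                and PORTALS <= areas          # all three portal areas covered
                and bool(PROCESS & areas))    # at least one process node

    extract_nodes = {"Qualifications: Diplomas", "Curriculum Aim: Breadth",
                     "Stakeholder: Employers", "Delivery: Workforce"}
    print(coding_is_valid(extract_nodes))  # True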

When coding, careful consideration was given to the selection of a passage of a source to code:

  • It needed to be useful as evidence and as such worth coding to the nodes representing Delivery, Assessment, Pedagogy, Accountability and Reform process.
  • It should not duplicate evidence coded in the same source; for this reason we decided that, in general, executive summaries, introductions and conclusions would not be coded.
  • We would not code research method so generally sections on methodology and appendices were not coded.
  • The text selected for coding would be sufficient to convey meaning but as far as possible would be short enough to only refer to one idea; NVivo annotations were used to add context when this might not be obvious from the selection of a small item of evidence (such as a bullet point).

Detailed training of the coders has been conducted to cover both NVivo and the coding conventions of REMS. In addition, Peter Goff has taken on the role of guardian, or chief examiner, of the coding framework and has regularly reviewed the coding. We experimented with NVivo's automatic tools for intercoder reliability but found that these provided too much detail to be usable. Peter relies on reading the source and using NVivo's coding density bar to check which extracts have been coded and the relevance of that coding. A coding policy and training manual was devised to further maintain coherence and understanding between different coders.

For the team, the coding is the crucial backbone of the system; if it is not controlled and reviewed, it could jeopardise the system's reliability. Therefore, the team devised strict quality control measures, managed by Somia, for the coding, as well as checks on all the system's features, e.g. classifications and set development. Some of these were manual checks conducted by the team and others were query checks developed in NVivo. These checks of the coding and the whole system are conducted on a quarterly basis.

Difficulties in Coding

The intense training provided to coders, as well as the quality control procedures utilised, has helped to make the coding successful. However, the team encountered some difficulties with the coding.

  • Because we were incorporating a range of different sources, authored and written in various styles, the coding was sometimes slightly problematic.
  • Both the size of the passage and the coder's desire to record all aspects reflected in the extract sometimes led to over-coding.
  • As we coded, some of the node definitions needed revising.
  • The coders' knowledge and expertise of the education and skills sector affected the coding.

Qualitative coding is not an exact science and is open to interpretation, but we feel that the node definitions, the structure of the nodes, and the training and quality control by the team have resulted in a good level of consistency.

Merging the Project

NVivo is a single-user system so to have multiple coders working on the project it was necessary for each coder to have their own project database and periodically to merge these into a single master project. This process was managed by Somia, who created batch projects for the coders containing only the coding framework and the sources for them to code. Each coder used the standard coding framework, coding to 'Other' nodes where necessary. Once the coding had been completed, the batches were merged into the master REMS project.
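
The Python sketch below models this batch-and-merge workflow, treating a project's coding as a nested mapping from source to node to coded extracts. It is only an analogy for NVivo's project-merge facility, not its actual mechanics; the sources and nodes are invented.

    from collections import defaultdict

    def merge_into_master(master, batches):
        """Union each coder's batch coding into the master project."""
        for batch in batches:
            for source, coding in batch.items():
                for node, extracts in coding.items():
                    master[source][node].extend(extracts)
        return master

    # Hypothetical batches: each coder codes their own sources against the
    # shared framework, coding to 'Other' nodes where nothing else fits.
    master = defaultdict(lambda: defaultdict(list))
    coder_a = {"source_one": {"Delivery": ["paragraph 12"]}}
    coder_b = {"source_two": {"Assessment: Other": ["paragraph 3"]}}
    merge_into_master(master, [coder_a, coder_b])
    print(sorted(master))  # ['source_one', 'source_two']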

This merge process worked well for the batches. However, when some recoding was done on the master project as part of one of the quarterly reviews, it proved difficult to merge the master projects because:

  • it was difficult to isolate the different parts of each master project that needed to be included
  • the size of the master projects made merging very slow.

As a consequence, subsequent recoding will take account of the merge issues. 

Analysis process

While the management of evidence is a key component of REMS, making use of that evidence in an effective and efficient manner is also crucial. It is in analysis that the naming convention, classification and coding of the evidence are so vital, and the reason why we were so thorough when setting these up.

The system became operational in October 2008, when the first quarterly analysis was produced for stakeholders. At this point some problems were encountered and key questions were raised:

  • How do we analyse the evidence in a productive manner?
  • What reporting is going to be useful for the stakeholders?
  • How do we account for gaps in the evidence?
  • How do we handle the high volumes of evidence?

In order to make the analysis and reporting constructive, the team decided to report in a consistent way – via strand sign-off, thematic, and quarterly analysis and reporting. We devised a standard reporting template for the various analysis reports. To ensure buy-in and validation of the evidence in the system, strand feedback and sign-off reports were produced for the relevant strand manager. This gave the strand manager the opportunity to comment on the data in the system and any resulting emerging themes, and helped us to ensure that all system processes were working and the right content and conclusions were reported.

Colleagues' and stakeholders' key requirements of the system were that the evidence be robust, the analysis comprehensive and technical jargon avoided. This was one of the key challenges: the system held so much valuable evidence, but we needed to ensure the right evidence was selected to answer the query or question.

The query feature in NVivo8 makes the following possible (a sketch of the cross-tab idea follows the list):

  • Searching a set of sources using the 'Find' function.
  • Google-type text searches across a selection of sources and/or coding.
  • Using the coding framework to retrieve relevant evidence – using a 'coding query' and the coding tree.
  • Making connections between sources, coding and attributes to produce a qualitative cross-tab – using the 'matrix' function.
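
As an illustration of the matrix idea, the Python sketch below counts coded extracts per node and attribute value, forming a simple qualitative cross-tab. The sources, nodes and years are invented for illustration.

    from collections import Counter

    # (source, node, attribute value) triples stand in for coded extracts.
    coded = [
        ("source_one",   "Delivery",   "2008"),
        ("source_one",   "Assessment", "2008"),
        ("source_two",   "Delivery",   "2009"),
        ("source_three", "Delivery",   "2009"),
    ]

    # Cross-tabulate node against year, as a matrix query would.
    matrix = Counter((node, year) for _source, node, year in coded)
    for (node, year), count in sorted(matrix.items()):
        print(f"{node:<12} {year}: {count} extract(s)")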

Once the team understood the capabilities and restrictions of the system and NVivo, we needed to ensure that the system allowed us to undertake analyses, regularly and on an ad hoc basis, in the following four ways.

Fast and effective searching in a complex evidence environment

The Research and Evaluation team are often asked to produce a quick briefing or snapshot on a particular topic or issue. Colleagues want to know:

  • What do we already know?
  • What is the current position?
  • What is all the evidence saying?

For these requests time is the critical issue, as a response is required immediately or within days; but at the same time a comprehensive response is called for, one in which Government, stakeholder and partner perceptions and policy, as well as QCA's, are all considered.

The coding framework was developed from the policy, and as a result it is often fairly easy to select the theme from the coding tree and analyse the information, using attributes and the query features of NVivo to limit the evidence to the appropriate context. Example issue: raising the participation age to 18.

Identification of emerging themes and issues, including areas where evidence is scarce/missing

As the coding framework was constructed using QCA's strategic objectives, when the evidence is collated and coded, it becomes apparent where themes are emerging (eg Apprenticeships) or gaps (eg Pre-14 education impact on the reform) exist. Further detailed analysis establishes the exact themes and gaps in the data.

Development of evidence-informed policy during hectic period of early implementation/ evaluation

A key feature of the system is to ensure any strategic or policy decisions are based on evidence. So, current, complete and reliable evidence must be in the system, to ensure implementation issues are considered and flagged at the earliest opportunity. At the evaluation stage the secondary evidence held in REMS, combined with other primary and statistical data, ensures that evaluations are an accurate reflection of the programme areas.

Single evidence base which can be probed by different stakeholders to provide tailored perspectives

The system was designed to include not only QCA published evidence, but also relevant evidence produced by stakeholders and partners. As a result the evidence held in the system is of value to a range of stakeholders and partners, and so they have been given the opportunity to use and query the database for their work. Example queries: a Learning and Skills Council request for information on delivery issues, and an internal request on the impact of the recession on education and skills.

In order to assist users (both internal and external) to use the system the team have constructed an analysis process manual which outlines the ways in which data can be retrieved and used. The diagram below shows the process map from the manual.

Process map 

Reporting the project

Reporting and reviewing is a key component of REMS as it allows the team and stakeholders to assess the reliability, robustness, usefulness and comprehensiveness of the system. A thorough reporting strategy has been devised and implemented, to ensure the analysis is consistently reported and shared with stakeholders.

The key aspects of the strategy are:

  • To produce quarterly reports which are precise and not overly burdensome for the group interested in the analysis area. These reports are between two and eight pages, in the style of a briefing paper using plain language, with technical information kept to a minimum.
  • Quarterly reports include a 'What's in REMS?' report and thematic reporting on emerging themes or core policy areas.
  • Ad hoc reports as requested by colleagues or on burning issues which require immediate attention and need to be communicated instantly.
  • Overall evaluation at crucial reporting points to report to the government department – the system is designed to report on a continuous and quarterly basis, but it is also to be used at crucial programme, end of year, review and evaluation points.
  • Validation of the analysis – the findings in the reports are always validated by the strand manager or the relevant expert in QCA. Once signed off, the reporting is communicated to Senior Management on a quarterly basis.
  • Reporting online. Now that the reporting process is finalised, the REMS team are designing an online web page for the QCA intranet and website. This is being developed as another avenue to communicate what REMS is, project progress and evidence reports.
  • All analysis and reporting influences the direction and development of the database, and therefore, each quarter the system is reviewed and evaluated. It is at this point that any changes to improve the system are made.
  • The system is never static and once reports are produced, this is not the end point; reporting may result in more detailed analysis of an area highlighted in the report or an update required next quarter. This continuous reanalysis is made easier by saving the NVivo queries that are used for the reporting.

Making use of REMS

As a result of the intense, targeted reporting and communication, the team has generated immense interest in the potential and usage of REMS.

The success of the system has enabled the team to provide a service to external stakeholders, which is one of our major achievements. The team had from the outset deemed it important that REMS should be used by both internal and external stakeholders. During the exploratory stage of the project, it became apparent that an analysis and reporting service would need to be provided to allow evidence reporting for stakeholders. It was clear that it would not be feasible for users to work with the NVivo system directly, because of the difficulties of providing appropriate access to the database and the skills needed to get the best out of NVivo.

In addition to work for the Qualifications & Credit Framework, Adult Skills and Lifelong Learning & Skills programmes, the success of the 14-19 Reform REMS project has led to requests from other programme teams in QCA to devise similar systems. This is an impressive achievement, but the extension to other areas has resulted in the following questions:

  • Do we incorporate all programme areas into one and have one REMS system?
  • Will the coding framework and other functionalities work for these different programme areas?
  • If the systems are kept separate how do we handle evidence sources that relate to more than one or all areas?

Work has started on other programmes and the decision has been taken to use individual systems for each programme but to design them with the potential to merge into one REMS master. This will allow us to review and evaluate across the programme, core policy and thematic areas; to understand and determine issues, good practices and to ensure policy is evidence based.

References

Reform of 14-19 Education

DCSF (2005) 14-19 Education and Skills White Paper.

DCSF (2005) 14-19 Reform Implementation Plan.

DCSF (2007) The Children's Plan.

DCSF (2007) Raising the Participation Age in Education and Training to 18.

DCSF (2008) Delivering 14-19 Reform: Next Steps.

DCSF (2008) Promoting Achievement, Valuing Success: A Strategy for 14-19 Qualifications.

Evidence-based policy

Greenhalgh, T. (2004) How do policymakers use evidence – what's the evidence? BMJ, pp. 1-3.

Lee, J. (2004) Is Evidence-Based Government Possible? Cabinet Office.

Marston, G. and Watts, R. (2003) Tampering with the Evidence: A Critical Appraisal of Evidence-Based Policy Making.

Mulgan, G. (2003) Government, Knowledge and the Business of Policy Making.

Sanderson, I. (2002) Evaluation, Policy Learning and Evidence-Based Policy Making. Public Administration, Vol. 80, No. 1, pp. 1-22.

Solesbury, W. (2001) Evidence Based Policy: Whence it Came and Where it's Going. ESRC UK Centre for Evidence Based Policy and Practice, University of London, pp. 1-10.

Use of NVivo

Bazeley, P. (2007) Qualitative Data Analysis with NVivo, Sage: London.

Lewins, A. & Silver, C. (2007) Using Software in Qualitative Research. Sage: London.

Richards, L. (2005) Handling Qualitative Data: A Practical Guide, Sage: London.

Looking back

Clare Tagg, July 2014

1 Four years on: research developments

The REMS project continued to be developed following the last report.  It survived many changes in personnel and organisation.  As part of significant public sector cuts, the organisation that developed REMS, the curriculum authority, was disbanded.  As a consequence the focus of the documents stored in REMS changed over the years and there were some reorganisations of the coding structure.   Despite increases in the number of documents stored in REMS, the budget associated with the project was reduced so that many of the documents were not coded.

Software upgrades to NVivo have continued to be adopted and this has made it much easier to incorporate PDFs.  Although the server version of NVivo was installed, this does not appear to have been fully exploited, probably because there is no longer the need to have multiple coders working on the project simultaneously.

REMS continues to be regarded as a repository of evidence to support developments in educational policy in the UK.  The original project used the qualitative coding to access relevant text references to investigate the research available about a topical question.  The coding on multiple dimensions allowed the precise selection of relevant references.  With the reduction in coding, there has been a greater reliance on using text search on the uncoded documents to identify more recent relevant references.  This means that some references retrieved are not relevant and there is duplication; a document often contains the same ideas in the executive summary, introduction, the body of the report and the conclusion (in the original coding, only the body of the report was coded).  Despite this, REMS is valued because there is no other way to quickly access comments made in a broad spectrum of research, policy and evaluation reports on a particular topic.

2 In hindsight

REMS is an unusual project for a number of reasons:

  • The source documents are all reports so have a formal structure and language; this makes them amenable to text searching but ideas are frequently duplicated within the reports.
  • The number and size of the documents is substantial (over 1000 documents, many over 100 pages).
  • It is a repository of reports, designed to be used to answer a wide range of ad hoc questions about education.
  • The original coding was designed so that relevant references could be retrieved using queries.
  • The project has been long lasting with significant changes in organisation and personnel.


In hindsight, I would have kept the project simpler and more straightforward, to make it easier and quicker to import documents consistently. This would have made it more resilient to changes in personnel. The original REMS project relied on document folders, document naming and coding as tools for retrieval. Given the functionality in NVivo at the time and the general lack of advanced NVivo experience within the organisation, it is hard to see how any other techniques (e.g. attributes) could have been used.

An alternative approach would have been to import only portions of documents (e.g. conclusions) and to rely on text search for access, so that no coding was necessary. This would have kept the project small (the size of the project frequently caused speed issues; I remember setting a query to run and, after waiting 30 minutes, taking my computer home and letting the query finish whilst cooking the dinner). By reducing the duplication of concepts in any one text, it might have been feasible to dispense with coding altogether, just using text search to identify relevant texts. One drawback with this (which I think we did debate at the time) was that it would not have been so easy for the researcher to access the more detailed arguments behind the conclusions. One of the strengths of REMS is that it brings together a wide variety of supporting research and policy reports in an easily accessible form.

An alternative would have been to use broad brush coding to identify sections of a report so that text searches could be run on those nodes.  The main problem with this approach is that researchers have a very thematic attitude to coding (supported by NVivo) and it is likely that this approach would not have survived the changes in organisation and personnel.

3 Software tools

There were several technical issues during the lifetime of REMS particularly surrounding the import of PDFs and performance, but one of the strengths of NVivo is that the project survived many upgrades and changes in the environment without loss of data.

Is NVivo the right tool for this type of project?  Probably not.

  • NVivo has become a large piece of software with many ways of using it.  This makes it very powerful for experienced researchers but it can be difficult for more casual users to understand how a project is structured and how to best access the data.
  • Despite the extra functionality in NVivo, it is frequently used as a thematic coding tool and thematic coding is on the whole not needed for a project like REMS.
  • One of NVivo’s strengths is that all of the sources are stored in a single file on a local drive, but for a large project like REMS with many documents, some form of fluid cloud storage might be much more appropriate.
  • One new feature recently introduced into NVivo, automatic coding based on coding patterns, has the potential to be very useful for a project like REMS, but I do not believe it has yet reached sufficient maturity.

For future projects like this, there are an emerging set of Big Data technologies being developed to make sense of datasets much larger than REMS.

Author profile: Clare Tagg

Unusually for a qualitative researcher, I have a BA in Mathematics and an MSc in Computer Science. My career includes working as a mathematician for the Bank of England, as a researcher at the London School of Economics and as a lecturer on computer science and management courses at the University of Hertfordshire and the Open University. At the age of 40 I became a full-time student again and began a PhD exploring the human aspects of software development via the longitudinal study of large software developments. During my PhD I became immersed in qualitative research and the use of software so that my PhD took 10 years to complete!

Together with other members of my family, I formed the Tagg Oram Partnership in 1991, to help people exploit information technology effectively. For the last 20 years, I have been a partner specialising in helping research students, university research projects and organisations undertaking contract research to use software to support qualitative research.

My approach with any research team is to consider first the nature of the research (the research question, methodology, goal and timescale) and the researchers (their expertise and research background) and then to show the team how to use software to further their aims. I have worked with The Qualifications & Curriculum Agency for many years on a variety of projects and have assisted them with their use of previous versions of QSR software (N4, N5, N6, NVivo7) and Sphinx.

In addition to my work on qualitative software, I also manage software projects which exploit new communication technologies. Together with other members of the partnership I am working on how these may be applied in the research community to support collaborative working and communication of research results.

Contact: 

Clare Tagg, Tagg Oram Partnership

Author profile: Somia Nasim

I would describe myself as a contemporary researcher and analyst developing and learning all the time. I'm not as experienced as my colleagues Peter Goff or Clare Tagg, but for me they are both great mentors and I've learnt much from working with them.

At the time of this report, I was working at QCA as a Research Data Manager, managing and directing a range of innovative evidence and data management projects, strategic research reviews and research process projects across QCA programmes, including 14-19 Reform, the Qualifications & Credit Framework, Adult Skills and Curriculum.

My interest in research started at university where, as sad as it may sound, I really enjoyed conducting both my undergraduate and postgraduate dissertations. At that point I had to make a difficult choice: either do a PhD or venture into the working world. As you can see, the avenue of work won, but I'm still hoping to do a PhD.

My first job was a year-long graduate placement at the Learning and Skills Council; it was here that my interest in research and the educational field developed. From there I moved to strategic management and research consultancy, working on a range of projects in the skills and educational field, where I got the opportunity to develop my research and project management skills. After a short stint at one of the Sector Skills Councils – Energy and Utility Skills – I moved to QCA. At Energy and Utility Skills I managed the Sector Skills Agreement national, regional and sub-sector analysis and reporting. This included the examination of a number of different datasets using SPSS, literature review and primary research.

I am an experienced researcher with a comprehensive knowledge and understanding of the skills and education arena. In terms of research I have experience of research management and planning, policy development, technical research skills which include qualitative and quantitative methods, research instrument design, data analysis and interpretation, MIS development and reporting.

Author profile: Peter Goff

I got into research while teaching in FE during the 1980s, when our traditional students were being replaced by huge numbers of unemployed adults. I obtained funding for a two-year project to focus on the experience of this group and learned research methodology in the process. I was lucky in that this covered statistical methods using SPSS and qualitative ethnographic techniques, and was supervised by hard-bitten professional field researchers in local government. I moved on to work with an LEA and somehow ended up in BTEC as monitoring adviser. Here I had the tremendous good fortune to work with Professors Desmond Nuttall, Harvey Goldstein and Carol Fitzgibbon on some groundbreaking projects to monitor grading standards. Probably still the best work I have done. I also learned much from Professor Gordon Stobart, who I worked with at NCVQ.

At the time of this report, I was a senior educational researcher for QCA and my work was primarily focused on 14-19 reforms, Functional Skills and the new qualification frameworks across the UK. In the five years prior to joining QCA I was an independent researcher, undertaking work for EDGE on vocational pedagogy, the NAA on bureaucracy in assessment, and City and Guilds on the role of government agencies. I led the team working with Awarding Bodies on Strand 4 of the VQRP and was an Associate of the Research Centre for the Learning Society at Exeter University on performance-based assessment and grading.

While all this might sound interesting to the research community, it sounds really boring when I am asked "What do you do?" at parties. So I tell them instead that I climb mountains, play the ukulele and fly microlight aircraft. I recently qualified as a pilot and I can tell you this is a much more interesting opening line.