There are plenty of tools designed for collaboration and remote work. However, this is not the same as co-creation.
designers and developers will need to learn to co-create cooperatively. This is not the same as collaboration, where small or large teams work on a certain product or outcome. Cooperative work involves multiple individuals and groups working within a common environment or infrastructure, and helping support that network or infrastructure for mutual benefit, while working on different objectives or outcomes.
Downes, S. (2019). A look at the future of open educational resources. International Journal of Open Educational Resources, Vol. (2).[www.ijoer.org/a-look-at...](https://www.ijoer.org/a-look-at-the-future-of-open-educational-resources/)
nodenogg.in allows you to create and edit notes and attachments with a group of designers/makers in a shared digital space. These notes and attachments can be placed visually in a spatial arrangement, with connections drawn between them, to provide clusters and connected relevance to a problem or research theme you are trying to investigate.
nodenogg.in is not an online collaboration tool; it is being designed to augment the physical design studio and should be used alongside it, allowing you to collectively capture the team's thought and research process. The tool is designed to support co-creation of resources and thoughts using a design thinking approach. The main initial application is to support project-based learning within a design school environment.
All notes and attachments sync to all devices, so once the design session is completed all parties have local copies of the distributed data.
nodenogg.in was created with design education in mind. If you would like to get involved in using or even contributing to it, visit https://discourse.adamprocter.co.uk to join the discussions and find out more.
I am fortunate to be a member of the Digital Education Working Group (DEWG) at the University of Southampton. Although I won't go into too much detail, it's worth sketching out the group a little; we are hoping to become a committee one day.
The group comprises individuals from across the University in academic, research and professional service roles who have a keen interest and background in technology and learning. I tend to be there to lean on the Humanities side and to get us thinking about the people using the technology, and how we leverage and delight the people in our organisation to better deliver world-class education. As a group we try (very hard) not to get sidetracked by what technology we should use or deploy, but to consider the overarching aims of empowerment through technology and how to better support our staff and ultimately our students as they enter a constantly networked and digitally connected world, no matter their core discipline of study. So I tend to be talking about digital literacies and the pitfalls of navigating our information society.
We meet once a month, and initially our work has been quite reactive, with thoughts and reflections on the roll-out of services like Office 365. One larger piece of work was providing recommendations to the University Education Committee on what it must do to meet the new web accessibility laws as they come into effect for public organisations within the UK.
However, our main task has been to discuss and create a vision for Digital Education at the University of Southampton which would connect to and inform the University's Education Strategy. We have debated a lot about the word 'digital', as we think everything is digital and this type of thinking should simply sit inside the Education Strategy, but that is something that may take a while to change. For now we need a vision and some factors around what makes a great educator at the University of Southampton.
So at the last meeting, after an update on some of the accessibility work being carried out internally to make our Blackboard Inc. installation more inclusive, we were tasked to use either Post-it notes or a Padlet in groups of three to revisit the vision and a connected topic; our group was assigned the topic of 'educator'.
It dawned on me: why use Padlet when we could use nodenogg.in instead? This would also test my theory that it is easy to "spin up" an instance and instantly work on an idea in a group. I managed to quickly persuade the group not to use Padlet and to test nodenogg.in. Within less than a minute we were all hooked up to a new instance by following my simple guidance, and we started co-creating some thoughts on the task at hand. The only blip was that Microsoft Edge on a Dell XPS (with touch screen) seemed to have some issues creating or joining an instance, but quickly switching to Chrome resolved this. The Dell machine also seemed to have issues with double-clicking to edit, which didn't really matter for the task at hand but was odd. The other member of the group seemed to instantly understand how to use nodenogg.in and very little guidance was needed. I later made a special direct link to this instance and shared it with the whole Digital Education Working Group; we shall see whether anyone else adds to it or creates more connections remotely. One of the questions was why use this when we have a Padlet; my initial answer was that nodenogg.in is Free Open Source Software (FOSS), which quickly resolved the argument.
As an aside, I noted afterwards Padlet's privacy policy, which we would have been agreeing to just by using it. Here is a quick peek at some of the high-level points it makes; some are designed to improve the service, but most are ultimately connected to adverts. [footnote]
track behaviour on Padlet
show third-party ads
we collect your IP address
we may also obtain information, including personal information, from third-party sources
collect device-specific information such as:
device brand, version, and type
operating system and version
browser type and version
screen size and resolution
battery and signal strength
show ads about Padlet outside of the Service (e.g. on LinkedIn)
[/footnote]
So by choosing nodenogg.in we kept all the data to ourselves, stored locally on the laptops in the room, which got some approving nods in the group. We also didn't inadvertently share any of our device details or IP addresses, or provide data for third-party advertisers. A small win for nodenogg.in, I would say.
Main takeaways
Being able to quickly create a shared (private) digital space was as easy as I hoped
The shared digital space is inclusive by design; you don't need an account or have to use Twitter
Very little explanation was needed; can I / do I need to make this zero?
I have recently found myself sharing again a link from November 2017 about some of the prototypes from back then, so it has become apparent that I need to provide an update on the tech stack now being used within my project, without diving into the code itself.
The stack is now Vue.js with Vuex, connected to CouchDB via PouchDB. The resulting information is rendered in HTML using a number of components and standard SVG elements to create interactive views. The use of additional plug-ins has been removed unless necessary, which helps keep the project in line with the GNU licence.
The biggest change is the move to CouchDB and PouchDB. This replaces my previous use of deepstreamHub, which initially looked like a great open source alternative to Google's Firebase; however, deepstreamHub, the cloud instance of deepstream, stopped working and hasn't been updated in a long time. I made a number of attempts to run my own deepstream server, the open source server tech being one of the reasons for picking the platform in the first place, but trying to get it running in the same way the hub version worked proved fruitless.
It became very apparent that this was a major block in the project. I reached out to a developer friend of mine to see what else he could locate out there. We chatted over the general project aims, the realtime nature, the Vue.js JSON-style structure that had been useful within Firebase, and the ownership of the data, and he went off to think about what could provide the underpinning structure I wanted.
He came back with the suggestion of CouchDB and subsequently PouchDB, which would also answer my need for local storage and offline capabilities. After making the changes to the project to use PouchDB and CouchDB, I was also pointed towards an interesting article from Ink & Switch looking at the development of local-first software and the concept of owning your data in spite of the cloud, which chimed very much with my research.[footnote]They have very recently released PUSHPIN, which is a collaborative spatial interface tool.[/footnote] Although they seemed to rule out CouchDB, due to some concerns over conflict resolution and its ability to do realtime, I have not hit those kinds of issues and have realtime collaborative capabilities working.
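For anyone interested in how this hangs together, here is a minimal sketch of the sync layer, assuming a local PouchDB database replicating to a remote CouchDB instance; the database name, URL and Vuex mutation below are placeholders rather than the project's actual code.

```javascript
import PouchDB from 'pouchdb-browser'
import store from './store' // the app's Vuex store (hypothetical path)

// The local database lives in the browser, so contributions are stored on the
// contributor's own device and keep working offline.
const localDB = new PouchDB('nodenoggin-instance')

// Remote CouchDB database (placeholder URL) that every device in a session
// replicates to and from.
const remoteDB = new PouchDB('https://couch.example.com/nodenoggin-instance')

// Live two-way replication keeps every device's copy of the data in sync.
localDB.sync(remoteDB, { live: true, retry: true })
  .on('error', (err) => console.error('sync error', err))

// Watch the local changes feed and push updates into Vuex so the views
// re-render in realtime as other devices contribute.
localDB.changes({ since: 'now', live: true, include_docs: true })
  .on('change', (change) => {
    store.commit('updateDocument', change.doc) // hypothetical mutation name
  })
```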
A little gif showcasing nodenogg.in’s use of realtime collaboration.
This testing session was carried out with Year 3 Games Design & Art students, using nodenogg.in version 0.0.24d (YouTube explainer). The students were presenting a series of potential game ideas to each other, some of them in teams. The format for each was a 12-minute presentation of their three game ideas with an accompanying slide deck. The other students used a nodenogg.in instance to comment as each presentation was happening, and for a few brief moments afterwards. The one main question I posed to each team, which could be answered collectively via nodenogg.in, was: what is their biggest struggle or issue? For example, choosing which game idea to take forward, or another aspect the "crowd" could help with. Reflecting on this session, I think a more focussed use of nodenogg.in on the question at the end could have worked better, as I noticed that contributing live with only the spatial view on show was much harder; the cognitive load of listening, typing and seeing spatially was high. This was in part due to removing the list view, which again gives rise to the idea of a series of views that work better for different types of sessions. Although some of the connections started to be drawn, as seen in this screenshot, even with the shortcuts the crowd couldn't think that fast.
I also felt that, because I was involved in thinking and responding during this session, I missed some of the issues students hit with nodenogg.in. In future I need to either record the session in a way that is useful for my own reflection, or get another staff member to use the tool in a prescribed way with students while I just observe. We have a big session set for the end of January, for which I need to prepare the use case or cases so I can gain the most useful feedback possible. This will likely include blocking in some time to get students to write feedback into Discourse. I may need to use Microsoft Forms instead, as Discourse is public and students need to join to contribute, which is certainly a barrier.
This testing has shown that the spatial view is a slower thinking space, which needs to be coupled with a quicker 'throw thoughts into a bucket' exercise.
For this work we first need a bucket collection mode. Then we move into a spatial view where the contributions are first neatly arranged[footnote]Some type of initial auto-placement, based on entry time perhaps?[/footnote] and we then facilitate a spatial process on the ideas: discarding some, clustering some, connecting some, and making new, informed and more detailed inputs into the spatial view.
The spatial view does need to trim the text, but there still has to be the ability to glance at the information and arrange it, as having to keep opening a reader view may be too slow even in spatial mode.
For now I'll call these two action modes Bucket mode and Consideration mode.
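To illustrate the auto-placement idea from the footnote above, something as simple as sorting the bucketed contributions by entry time and dropping them onto a grid might be enough. This is a hypothetical sketch; none of these names exist in the codebase.

```javascript
// Hypothetical helper: lay out bucket-mode contributions on a neat grid when
// switching to Consideration (spatial) mode, ordered by entry time.
function autoPlace (nodes, { cols = 4, cellWidth = 220, cellHeight = 140 } = {}) {
  return [...nodes]
    .sort((a, b) => a.createdAt - b.createdAt) // assumes each node carries a createdAt timestamp
    .map((node, index) => ({
      ...node,
      x: (index % cols) * cellWidth,
      y: Math.floor(index / cols) * cellHeight
    }))
}

// The resulting x/y values could then be saved back as position documents.
const placed = autoPlace([
  { id: 'n2', createdAt: 1578650030000 },
  { id: 'n1', createdAt: 1578650000000 }
])
console.log(placed) // n1 at (0, 0), n2 at (220, 0)
```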
Main takeaways
Bucket mode to be turned on.
Reader view needs some work.
Gathering more feedback in sessions is really important.
I was keen to eliminate the need for any type of log in, as storing usernames and passwords would be problematic. This is due in part to one of the privacy design principles for nodenogg.in: the system must only store data it needs to know, and any such data should be encrypted, and decryptable only by the owner of that contribution. A log in would also require a sign-up process, which would drastically slow down the ability to just point a group of designers/makers at a URL and start working together. This would also rub up against another of the principles, delightful design; signing up and saving passwords, even with 1Password, is not really delightful.
nodenogg.in does, however, need a way to identify contributions, so it uses the value attached to the device name (client id). This is decided by the contributor when they arrive at the initial URL for the first time, and is then used as the name of each document.[footnote]CouchDB's data structure uses documents instead of tables and is formatted as JavaScript Object Notation (JSON), which also easily matches Vue.js's data structure.[/footnote] When you decide on a device name, this creates a new document which is the data structure for that device's contributions. Clustering contributions into a document per device enables differences in read/write access to the data and allows the contributor to easily remove, export and single out their own contributions. Other data, such as positions and connections, are stored as separate documents to simplify the creation and management of shared views.
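To make that concrete, the shape of the data might look something like the sketch below; the field names are illustrative rather than the exact schema.

```javascript
// Illustrative only: one document per device holding that contributor's nodes,
// plus separate shared documents for positions and connections.
const deviceDocument = {
  _id: 'purple-penguin', // the chosen device name doubles as the document id
  nodes: [
    { nodeId: 'n1', type: 'text', content: 'What worries you about Friday?' }
  ]
}

const positionsDocument = {
  _id: 'positions',
  positions: [{ nodeId: 'n1', x: 120, y: 80 }]
}

const connectionsDocument = {
  _id: 'connections',
  connections: [{ from: 'n1', to: 'n2' }]
}
```

Keeping positions and connections out of the per-device documents means the shared view can change without touching anyone's contributions.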
I had specifically been looking at the process used within micro.blog, and which I had used to some degree back in my PHP coding days: a URL with a token appended, which is then emailed to you. This negates the need to store usernames or passwords but requires a way to email those URLs from the server, which I was not keen on. Sendy[footnote]Sendy is a self-hosted email newsletter application that lets you send emails via Amazon Simple Email Service (SES).[/footnote] could possibly have done this and has many options not to track; however, this felt overly complicated for what I required.
During this research I was reading about JavaScript Object Notation (JSON) Web Tokens (JWT), which led me to web storage. I soon realised that I could use web storage to store the device name within localStorage,[footnote]localStorage is persistent storage kept in the browser until the user chooses to remove it.[/footnote] so that after the initial 'log in' Vue.js could check on any arrival whether this storage was in place and redirect the visitor straight to making contributions on that instance, thus "logging" them in.
When you first visit the URL you are asked to input a "device name", which is then stored in your browser's local storage and enables you to "log in" without the need for a username and password. When you next load the page this value is looked for and, if found, connects you to the correct document store. Deleting the local storage would require you to enter a device name again; however, specifying the same device name would connect you to the same document store.
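A minimal sketch of that check, assuming a placeholder storage key and prompt wording:

```javascript
// Look for a previously saved device name; if none exists, ask for one.
// The key name and prompt here are illustrative, not the actual implementation.
const STORAGE_KEY = 'nodenoggin-device-name'

function getDeviceName () {
  const saved = localStorage.getItem(STORAGE_KEY)
  if (saved) {
    return saved // effectively "logged in": connect to this device's documents
  }
  const chosen = window.prompt('Put in a device name, it can be anything you want it to be')
  if (chosen) {
    localStorage.setItem(STORAGE_KEY, chosen) // persists until browser storage is cleared
  }
  return chosen
}

const deviceName = getDeviceName()
```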
This approach worked really well in the end. In testing, students could all quickly grasp the idea of naming their device and would quickly assume pseudonyms. I would like some feedback on changing the wording of "device name" to something that seems less technical, as this could be a bit of a barrier to the intuitive nature of getting up and running. I think it causes a 'huh' moment, a pause, and thus a break in user[footnote]I really don't like using the term user, but just replacing it with human is odd; I might start using designer/maker.[/footnote] flow, as each time we have tested I have always said "put in a device name, it can be anything you want it to be."
I plan to post more updates to researchnot.es going forward, with more details on the project milestones, the schedule and my thoughts as we ramp towards the end game. I also highly recommend following my more discursive micro.blog, and specifically the PhD category (you could use NetNewsWire to do this), as I document ongoing thoughts related to this project there on a much more casual and regular basis. Here the posts will be milestone documentation; the official research documents will be on manifold.soton.ac.uk. All feedback is welcome on Discourse, and each post here will have a specific option to pull in comments as well. I have also started using MarsEdit to post to the blog, so that should make things much easier.
After using nodenogg.in with the final year students a number of times with reasonable success, I realised that the session I had planned with our first year students was a week before their very first presentations, and that it could be another way to test my hypothesis from the very first test: that nodenogg.in could be used to calm the nerves of students as deadlines approach, by letting them anonymously come to the realisation that everyone felt very similar and had the same types of concerns. For this session I also removed the list view from within nodenogg.in and presented just the spatial view on its own, and removed the ability to see device names, increasing the level of anonymity.
We posed a question to respond to inside nodenogg.in: what are you worried or concerned about for Friday's presentation? As the students started to add comments you could feel the tension lowering in the room as it dawned on them that they all had similar issues; the fact it was anonymous was again popular. One student also, without prompting, started to organise and cluster the nodes together spatially as similar thoughts appeared (this may have been prompted by me suggesting it, I am not sure). Interestingly, as one student had taken the lead on this task, the others didn't interfere with this student being the designated organiser, although we didn't know who it was until I was walking around the room watching students use the platform. I tried to elicit feedback afterwards on Discourse, but that didn't work! I will allow time for this feedback to be gathered in the future.
Again I had my Mac plugged into the main screen, projecting the activity in nodenogg.in, which reminded me that a view mode toggle for a present mode would be good.
As staff we then talked through the concerns on screen, which made the session really useful for collective reflection and pause with an impending deadline.
Main takeaways
Clearly working to help support end-of-project concerns; realising you're not the only one.
I was able to help by talking about the collective concerns.
Could there be an option to lock down spatial arranging to one person?
The concept of a big screen viewing mode would help.
Reflective use prior to deadline.
Allocate time for discourse feedback (make students do it).
This test was where I introduced the spatial mode and the list mode together for the first time. Students also started to use the different types of nodes including the link node and attachment node. However viewing the attachments and linking out was not possible.
Students automatically started moving the objects around in the spatial view, which was good to see, although this seemed to be more about being able to view them than about ordering them at first. I also think the number of students contributing dropped. I think the main reason links and attachments were added this time, versus the last, was that there was now a view of the node types in the spatial view, but also that I specifically asked students to think of links and attachments, pointed out how to edit the Create type, and showed that they could use Add to upload files and images from their own devices.
Main takeaways
Need a quicker way to detect and add links
Attachments could do with drag and drop
If a pasted link points to an image, it should somehow be uploaded as an attachment
Even with the simplest additions to nodenogg.in, testing with people is crucial. I know this is obvious, but with a tool that has multiuser capabilities testing on your own is impossible, which means you can't fall too often into the trap of assuming things will be used in a specific way. After the previous testing I wanted to get the ability to make connections up and running in nodenogg.in as fast as possible, so I focused on adding this ability. I then added a number of buttons and keyboard shortcuts to speed up the process of moving between interactions: create, finish, connect and zooming. I took the updated version to a team of four to see how they would create nodes and connections, and I could see that they did some unexpected things. Firstly, I had mapped the controls to Ctrl, but these conflicted with the browser's defaults; I had been working on macOS, while three of them had Windows. So I quickly changed the modifier to Shift, which of course introduced an issue with capital letters triggering the shortcut by mistake; also, Cmd is not considered a modifier key in the same way, so I couldn't have macOS-style shortcuts. I am not sure of the best way to solve this yet.
Once I had the shortcuts working, what I was not expecting was that when a person went into connection mode they might then start dragging nodes around again; I was expecting them to be interested only in connecting nodes. This was easily fixed by turning off connection mode if someone started dragging nodes.
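For reference, the kind of handling involved looks roughly like this; Shift is the modifier and the mode functions are placeholders, so treat it as a sketch rather than the actual code.

```javascript
// Placeholder actions standing in for the real component methods / Vuex mutations.
const createNode = () => console.log('create node')
const toggleConnectionMode = () => console.log('toggle connection mode')
const finishEditing = () => console.log('finish editing')

window.addEventListener('keydown', (event) => {
  // Skip shortcuts while someone is typing in a note, so Shift+letter
  // (a capital letter) doesn't trigger an action by accident.
  const el = event.target
  if (el.tagName === 'INPUT' || el.tagName === 'TEXTAREA' || el.isContentEditable) return

  // Shift is used because Ctrl collides with browser defaults and Cmd
  // (event.metaKey) can't be relied on in the same way cross-platform.
  if (!event.shiftKey) return

  switch (event.code) {
    case 'KeyN': createNode(); break
    case 'KeyC': toggleConnectionMode(); break
    case 'KeyF': finishEditing(); break
  }
})
```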
Main takeaways
Popping into the studio to quickly test a small function is very useful
The next test of nodenogg.in was with the returning Year 3 Games Design & Art students. We had a slightly updated version of nodenogg.in; however, I disabled the spatial view, as it had connection and arrangement issues, and just displayed the shared text list view. In this first test of the new academic year I replaced my previous use of an Etherpad with nodenogg.in, which is also why I was happy testing just the list text view.
Students were presenting three themes they had been researching in an 8-minute presentation, while all the other students were encouraged to connect to nodenogg.in and respond live with text commentary. Students had to select one theme to take forward and deep dive into, so the other students were also encouraged to vote.
Students appreciated this approach as they tried to help each other with ideas to follow up on each theme and with which theme to select. We could have just as easily used Etherpad or Word online, but these don't offer the simple anonymous approach and wouldn't test nodenogg.in to see what works and what was still causing usability issues. In this version, after each presentation I had to copy and paste the responses into a text document, as the system only had the capability to connect to a hardcoded instance.
Main takeaways
Unlike Etherpad, students couldn't all write together, as the list view showed each student's contribution as a block
Students didn't like to create more than one note; they just kept typing
There was no styling or use of line breaks, which was not good
There was no easy way to vote against the current text, again due to the list view being blocks
The spatial view and connections may resolve the above
After reviewing some of my explanation videos I realised that I needed to keep a more accurate record of what the updates to nodenogg.in were and when they actually occurred. This would help me use the clips as reference material for any testing that takes place, so I can see which version was used in each test, but it would also enable me to look back at the project and review and record why changes had been made. My previous YouTube videos had been a little haphazard in this regard and I needed a more logical system.
I was listening to the Accidental Tech Podcast and they mentioned the semantic versioning system as a better way to describe updates, so I applied this to my versioning system. I also found some built-in features within OBS Studio to stamp the date and time on screen. I am still using the same process of streaming the recording live to Twitch and afterwards downloading the video to be uploaded to my YouTube channel; there is a seven-day window to download from Twitch, which I do need to be aware of. Brent Simmons, developer of NetNewsWire, whose contribution notes I have taken great inspiration from, also mentioned adding a letter to the build to signify its state of play, so all versions currently end with the letter d for development. For now the alpha build is also a mirror of the dev build.
These small changes will, I hope, provide a usable level of documentation of the thoughts and ideas behind each version, and I expect the explainer videos to get shorter as I just cover updates and why those choices have been made.
URLs
development is at [dev.nodenogg.in](https://dev.nodenogg.in)
alpha is at [alpha.nodenogg.in](https://alpha.nodenogg.in)
beta is at [beta.nodenogg.in](https://beta.nodenogg.in)
release will be at [nodenogg.in](https://nodenogg.in)
Main takeaways
Need to automate build process to each URL asap
Need to work on safeguarding data in beta and alpha
In June nodenogg.in was first tested within a design studio setting, specifically with a group of final year BA (Hons) Games Design & Art students. The basic parts of the system were in place, with the realtime sync between Vuex, PouchDB and CouchDB working as I had planned.
The main workflow is to enter an instance; there was a pre-made instance[footnote]Instance is the term used to denote independence, so groups can work on their own instance of data within nodenogg.in.[/footnote] for this testing session. To join and contribute to this instance the students had to specify a device name, which can be any name you like; students used this as a chance to create fun names and, to some degree, to instantly make their contributions anonymous.
In previous work using Etherpad[footnote]Etherpad is an open source, web-based collaborative real-time editor, allowing authors to simultaneously edit a text document and see all of the participants' edits in real-time, with the ability to display each author's text in their own color.[/footnote] for a similar process, I had found students really liked the ability to contribute with pseudonyms; they felt freer to comment and less conscious of being 'judged' on their contributions.
So students visited the alpha URL, typed in a "device name" and were asked simply to comment on concerns they had around the final week. At this stage the only view of the realtime data was a single column text view updating as people typed. Most students didn't create new notes for each idea; they created longer notes and made their own bullet lists or spacing, which was interesting to see given there were no formatting options available to them.
I had my MacBook plugged into the presentation screen as well. Students looked at their own device for typing and tended to refer to my screen to see all the data appearing live. This suggests a present view could be really useful.
As students saw people typing up concerns, the pace of contributions sped up as everyone became more confident. There were also moments of realisation as students realised that everyone in the room, in teams or not, had the same types of concerns, and the feedback was that this made them feel much more confident heading towards hand-in and less "bad" about where they were with the project. I was able to unpack some of the comments with the group as well.
Main takeaways
Realtime was appreciated
Big Screen view mode could be added
Anonymous input
Supported cohort concerns (made students feel better)
This document is also on my Manifold instance here
Research Problem
Despite the widespread application of digital technologies in higher education there is scant evidence to suggest that these have had a significant impact on student learning. (Bainbridge, 2014, p1)
Educational institutions spend a significant proportion of their budget on learning technologies each year. However, the underlying metaphor on which this technology is based continues to be the filing system.
These technologies often have sharing features added as a ‘bolt-on’ to core functionality, rather than being built for project-based learning. They are designed to be separate from, rather than integrated within, learning spaces.
Sharing is probably the most basic characteristic of education: education is sharing knowledge, insights and information with others, upon which new knowledge, skills, ideas and understanding can be built. (Open Education Consortium, [www.oeconsortium.org/about-oec...](https://www.oeconsortium.org/about-oec/))
Some thinkers on learning technologies (Watters, 2014:22) talk of the Learning Management System (‘LMS’) as being a piece of administrative software which “purports to address questions about teaching and learning but often circumscribes pedagogical possibilities”. As Downes (2007) notes, the LMS can over-structure the learning experience, conflicting with research and evidence about how students learn.
Design education is, in particular, a very visual field with a requirement for spatial manipulation. Current learning technologies on offer do not augment the physical studio experience, and push educators and students towards commercial, more generic offerings without a pedagogical underpinning.
Students are used to a more ‘delightful’ experience with this kind of software, as evidenced by the quotations below:
Slack is useful for quick & easy non-distractive communication. It is simple to navigate and provides a direct platform in which to contact peers and lecturers, creating channels and direct messaging groups is ideal for a more tactile approach to a discussion. (link)
Onenote is great as everyone can have their own section to put their own information/images on it when working together.
Etherpad definitely proved helpful as everyone put down questions, films, books and other information that has now given me more starting points to research.
Research Proposition
Project-based learning involves collaboration in physical spaces that often cannot be replicated in digital spaces. Through the creation of a spatial interface, engagement with materials and other learners becomes more dynamic and fluid.
There is a dichotomy between tools that are personally owned and single-user by default, and Learning Management Systems provided by educational institutions. The latter offer top-down static file repository functionality and fixed courses, rather than features that support project-based learning.
As a result, this research will begin by examining the fundamental concepts of spatial design, including mind-mapping and concept mapping. It will consider the influence of the design paradigms provided by Xerox's PARC institute and investigate the legacy of Human-Computer Interaction (HCI) pioneers such as J.C.R. Licklider, Ivan Sutherland, Ted Nelson, Douglas Engelbart, Seymour Papert and Alan Kay.
Some information must be presented simultaneously to all the men, preferably on a common grid, to coordinate their actions. (Licklider, 1960, p9)
Figure 1: Example Sketch of a spatial interface for learning objects
A spatial interface allows users to take advantage of their visual memory and pattern recognition. (Shipman and Marshall, 1999)
A number of educational theories have been reviewed, and Connectivism (Siemens, 2004) will be specifically considered in relation to the tool itself.
Research Question
Can the spatial elements of a Design Studio be replicated in a digital learning environment to enhance deep engagement and collaboration?
Scope
The project will create an interface which will be tested with a group of students over a five-week project that will run twice, in 2020 and 2021. The tool and the project will be evaluated by measuring the staff and student experience through observation, surveys and outputs.
The entire project process will be captured as it progresses in an open and free software approach and documented at the locations below. The systematic packaging of this process and the application of design thinking and human-centred design will also reveal tool-building processes and culminate in a manifesto for designing these types of new, design-led digital tools for enhancing project-based design education.
Feb 2020 - Use with specific board project year 1 Games Design Students
Feb - March - EVALUATE Testing
March - June - ITERATE
June - Sept - REFINE
Sept - Oct - Testing with Year 2/3
Oct - Nov - ITERATE
Nov - Dec - REFINE
2021
Jan - Feb - Use again with board project year 1 Games Design Students
March - July - WRITEUP
August - Oct - HAND IN
References
Bainbridge, A. (2014), Digital technology, human world making and the avoidance of learning. Journal of Learning Development in Higher Education: Special Edition: Digital Technologies, p1.
Licklider, J.C.R. (1960). Man-Computer Symbiosis. IRE Transactions on Human Factors in Electronics HFE-1, pp.4–11.
Open Education Consortium, About Page, [Online], Available at: www.oeconsortium.org/about-oec… , [Accessed December 15, 2018].
Shipman, F.M. and Marshall, C. (1999), Spatial Hypertext: An Alternative to Navigational and Semantic Links, [Online], Available at: cs.brown.edu/memex/ACM… , [Accessed December 15, 2018].
My proposal for eLearn 2019 was accepted and I was able to present to a good number of academics and students from across the UK. The conference took place at the University of Southampton's Avenue Campus, so it was not far for me to go, and I made a number of great contacts within Humanities and among staff from other universities who gave positive comments on the presentation.
You can watch the video of my short talk about my PhD work in progress from eLearn 19 here
The full slide deck can also be reviewed below.[footnote]Note the link at the start of the deck, https://dctr.pro/elearn19, was for an Etherpad that people could use to comment on during the session. My self-hosted Etherpad install is sometimes offline.[/footnote]
As part of the process of a PhD you have to upgrade from MPhil level to the PhD proper. To do so you submit documentation internally. This is then reviewed by your supervisors and, in my case, two additional academics; I had two as my PhD is within Web Science and thus crosses disciplines. Ian Dawson was my Design/Art internal external and Les Carr was my Web Science internal external. After the documentation has been reviewed you present to a panel of these academics and take questions from the internal externals; your own supervisors only observe. There is then a break before the internal externals come back with approval, recommendations or rejection.
Here is the full document I submitted in December all housed on my Manifold instance and below are the set of slides I presented.
Annoyingly, I didn't record my presentation; I had completely planned to but forgot with the nerves.
The presentation takes the format of a viva, and this was one of the most stressful moments in my life as an academic. I was 'grilled' in a number of ways and it was really hard to take some of the criticism. I will not go into detail here; however, on reflection the critique was well founded, although I still feel a number of my points were not really heard or understood. My presentation was praised for saving the day and actually making things much clearer to the internal externals, but due to the specific issues they had I didn't get much chance to explain all the detail, as I had to defend and explain some simple concerns. I came away with recommendations. At the end there was overall excitement about the project, but this was certainly held back until after the official paperwork was signed!
I am, of course, very worried about my final viva, which I hope will be in Summer 2021. My supervisors, however, said that this upgrade had really helped them solidify the cross-disciplinary nature of my project and gave them some thoughts on how to package up the written work alongside the practical work. I must say it will be great to just get on with making next, which is the real encouragement, once I am allowed to progress.
In the end I had some recommendations, which ended up in the form of this document, in order to be allowed to progress.