I recently organised a remote exam for a master course and had a very bad experience, as many students ended up cheating.
I am going public here because I think it is a good proxy for the tension that many of you must have felt in the post-covid era: on one side, remote exams can be super convenient for you or for the students. Especially in advanced education, where the problem of motivation in remote teaching is less pressing, the option to decouple the physical place of teaching and grading from that of the students' lives is really tempting. But is it really possible?
In my case I had been “forced” to organise the exam remotely, given the nature of the master program, with students physically in different countries. At the same time I had had, as a student, a great experience (setting aside some technical problems with a specific browser…) with proctoring software/services where the room is first “scanned”, the software takes control of the PC where you do the exam, as well as of the mic and the camera, everything is recorded, and a mix of AI and cheaply paid people somewhere in Asia checks that everything is ok or reports to the teacher. Because everything is recorded for years, students are also typically warned that if cheating is discovered, maybe years later when they are established professionals, their degree will be revoked.
In my exam, however, I didn't use these services. I had been told that the students were “good”, already at the end of their academic career. And when I proposed these systems, links included, no one ever replied.
I cannot find on any site a chocolate tasting experience similar to the one I had, so I describe it here.
First of all, let me say that I only like Lindt chocolate. All the other “supermarket” ones, I just don't like them. The ones from specialised chocolate shops are at best like Lindt but cost 10-100 times as much.
Now, in France where I live there are two 'black 85%' Lindt bars, one of which is called 'sweet'. They are very different, but contrary to what the name suggests, not in the 'sweetness' aspect.
The 'sweet' one has more cocoa butter: when you touch it, it melts a little, becoming oily. The bar does not break sharply. Above all, it 'melts' in the mouth, leaving a very good but… homogeneous, flat, constant taste. The other 85%, on the other hand, breaks in the mouth, melting more slowly, but above all, it gives 'flashes' of intense flavour, which may last only a few seconds, but are the kind of thing you remember for a lifetime (who knows, one day the 'Antonello bar' will become famous).
I believe that the 'sweet' one is 'easier' to eat for those uninitiated to dark chocolate, but the latter offers greater gratification.
Non-vaccination changes the balance of benefits and burdens between a citizen and society, so I consider it 'normal' that a number of restrictions are placed on the non-vaccinated in order to preserve the rights of the collective. But any public action must have this as its objective. You cannot force a non-vaccinated person to blow a trumpet on the balcony at all hours just for “revenge” or to “make life difficult for them”. Any policy restricting their freedom must be justified by the need to limit the risk that the non-vaccinated person poses to society; otherwise they are right when they speak of dictatorship. As for the vulgar term, let's not dwell on it: it makes clear that Macron has an elitist attitude light years away from what he considers the “ox people”.
In academia, you are under a certain pressure to present/publish your results with some indication of the confidence you have in them, and often you do that by reporting a Confidence Interval (CI), that is, the range within which you claim the true value of what you are estimating lies, with a certain probability (…almost; frequentists, please forgive me).
Now, the problem is that it is really, really hard to account for _all_ sorts of variability in the process that you are studying. So in the end, what you claim confidence in is just a small subset of the variability, and by reporting a CI you are deceiving your audience into being more confident than they should actually be.
I have one example: I have a sister who loves to consult weather sites… so she can collect tomorrow's temperature for a given town from several sites and then come up with an estimate and a CI. The problem is that most sites actually implement slightly different versions of the same algorithm on the same raw measurements or raw estimations, if not taking the data directly from the same source (generally, MeteoFrance here). So they all provide roughly the same prediction, and the CI she computes is deceiving.
Mathematically, the problem is that in setting the CI you assume that each observation is independent, when actually it is not. This is why it is important to check and correct for correlation before reporting the confidence you place in your estimates.
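The weather-site story can be illustrated with a small Monte Carlo sketch in Python (numpy assumed; all the numbers, the number of sites, and the noise levels are made up for illustration). Each "site" here shares one common error, exactly because they all derive from the same source, and only adds a small tweak of its own; the naive CI sees only the small tweaks and is therefore far too narrow:

```python
import numpy as np

rng = np.random.default_rng(42)

true_temp = 20.0    # the quantity we want to estimate (hypothetical)
n_sites = 8         # number of weather sites consulted (hypothetical)
n_trials = 10_000   # Monte Carlo repetitions

covered = 0
for _ in range(n_trials):
    # All sites start from the same raw source: one error shared by all of them...
    shared_error = rng.normal(0.0, 1.0)
    # ...and each site only adds a small independent tweak of its own.
    site_noise = rng.normal(0.0, 0.2, size=n_sites)
    obs = true_temp + shared_error + site_noise

    # Naive 95% CI, (wrongly) treating the sites as independent observations:
    # the sample spread reflects only the small per-site tweaks, not the shared error.
    mean = obs.mean()
    sem = obs.std(ddof=1) / np.sqrt(n_sites)
    if abs(mean - true_temp) <= 1.96 * sem:
        covered += 1

# A correctly calibrated 95% CI would cover the true value ~95% of the time;
# here the empirical coverage collapses well below that.
print(f"empirical coverage of the naive 95% CI: {covered / n_trials:.1%}")
```

Because the shared error is invisible to the between-site spread, the interval ends up covering the true temperature only a small fraction of the time, instead of the nominal 95%.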
So, this weekend I decided to take a break from my Machine Learning course and paint a few things, including the kids' wooden house.
There is a side with a plastic window, and my task was to paint the wood, trying to get the brush as deep as possible into the gap between the wood and the plastic:
Now, just as I started from the top, a spider that was hidden in the gap between the plastic and the wood a bit further down did the first smart thing: he (or she?) came out. He understood that if I had continued, I would have reached his hideout with the brush and killed him. He then did a second smart thing: he went onto the window, on the plastic side, not on the wood side. Then he waited a bit, and as I was painting the wood from top to bottom… he moved across the plastic side upwards, “against the gradient”, to reach a calm, safe area.
I wonder what neural structure, even in such a simple creature, allowed him to make the right decisions to maximise his fitness, and how powerful a computer we would need to match these abilities.