AI without the BS and code reviews
An excellent keynote video on AI with some great hands-on demos, and a study backing up what makes code reviews useful.
Keynote: AI without the BS, for humans [Video]
Some really nice demos of how generative AI works and its limitations, and Scott Hanselman is a brilliant speaker.
What makes a code review useful?
Interesting results from a study on effective code reviews. Nothing especially groundbreaking, but it's nice to know the study backs up common intuitions. The specific findings are:
- Experience matters: Reviewers who had previously worked on a file provided more useful feedback, with their comments rated 74% useful versus 60% for first-time reviewers.
- Comment density drops in large reviews: The more files in a change, the lower the proportion of useful comments, suggesting that small, focused changes improve review quality.
- New hires show an improvement curve: Reviewers’ usefulness improves dramatically in their first year but plateaus afterward.
- Certain types of comments are more useful: Comments identifying functional defects or validation gaps were rated over 80% useful, while vague questions and generic praise were deemed useful less than 50% of the time (a short illustration follows this list).
- Psychological safety improves effectiveness: Developers were more likely to leave critical and useful feedback in teams with open, supportive cultures.
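To make the comment-type finding a bit more concrete, here's a made-up snippet, not from the study. The function, the bug, and both review comments are purely illustrative, contrasting a comment that flags a validation gap with one that's just generic praise:

```python
# Hypothetical code under review: percent is never validated.
def set_discount(order, percent):
    # Useful comment (flags a validation gap):
    #   "percent isn't checked; a negative value or anything over 100
    #    silently produces a wrong or negative total. Clamp it or raise
    #    ValueError?"
    # Low-value comment (generic praise):
    #   "Nice, looks good!"
    order["total"] = order["total"] * (1 - percent / 100)
    return order


order = {"total": 100.0}
# The kind of defect the useful comment would have caught:
print(set_discount(order, 150))  # -50.0
```

The useful comment points at a specific defect and suggests a fix, which is exactly the category the study rated over 80% useful.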