
Berrijam Jam

Our mission at Berrijam is to bring equity to artificial intelligence by making it simple, accessible and affordable for as many people as possible. Beyond technical innovation, the mission requires bringing together machine learning expertise with the art of storytelling.


That's why we created Berrijam Jam!


A Berrijam Jam is a competition where teams solve interesting predictive or analytical problems using machine learning, computer vision and artificial intelligence. In addition to trophies and bragging rights, there are cash prizes for top teams.

Jam 2024

Problem: Building image classifiers requires vast amounts of data, powerful computing resources, and complex engineering. If we could build highly accurate models with minimal data, we could make image AI accessible to many more domains and organisations.

Challenge: Create an algorithm to train a domain-specific image classifier using just 5 positive and 5 negative images, adaptable to different narrowly scoped problems simply by changing the provided training data.
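The challenge leaves the method open, but one common few-shot pattern is to embed each image with a frozen pre-trained encoder and fit a tiny classification head on the ten resulting vectors. The sketch below is illustrative only: synthetic arrays stand in for images, a fixed random projection stands in for a real encoder, and a nearest-centroid rule stands in for the head. None of this is Berrijam's reference implementation.

```python
# Illustrative sketch of a few-shot "5 positive + 5 negative" classifier.
# Assumptions: synthetic 8x8 arrays stand in for images, and a fixed random
# projection stands in for a frozen pre-trained encoder. Not Berrijam's code.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data: "positive" images are bright, "negative" images are dark.
pos = rng.normal(0.8, 0.05, size=(5, 8, 8))
neg = rng.normal(0.2, 0.05, size=(5, 8, 8))

proj = rng.standard_normal((64, 16))  # hypothetical frozen encoder weights

def embed(images):
    """Map images to fixed-size feature vectors (encoder stand-in)."""
    flat = np.asarray(images).reshape(len(images), -1)
    return flat @ proj

# "Training": with only 10 examples, a nearest-centroid head is enough.
c_pos = embed(pos).mean(axis=0)
c_neg = embed(neg).mean(axis=0)

def predict(images):
    """Label 1 if an embedding is closer to the positive centroid."""
    e = embed(images)
    d_pos = np.linalg.norm(e - c_pos, axis=1)
    d_neg = np.linalg.norm(e - c_neg, axis=1)
    return (d_pos < d_neg).astype(int)

# Unseen examples drawn from the same two synthetic distributions.
test = np.concatenate([rng.normal(0.8, 0.05, size=(3, 8, 8)),
                       rng.normal(0.2, 0.05, size=(3, 8, 8))])
labels = predict(test)
```

Swapping in a different problem's 5 positive and 5 negative images retrains the head without touching the code, which is the adaptability the challenge asks for.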


Team: We haven't chosen a name but here

The winning team achieved a mean F1 score of 0.82, calculated across five problem domains.

Runners-up


Team: Dragon 5

The first runner-up team achieved a mean F1 score of 0.80, calculated across five problem domains.

Team: Data Probes

The second runner-up team achieved a mean F1 score of 0.79, calculated across five problem domains.


F1 scores for the successful teams

Below is the list of F1 scores for the successful teams. The F1 scores for all other participating teams, not listed in this table, were 0.0.
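For reference, the F1 score for a single problem is the harmonic mean of precision and recall, and the ranking metric is the plain average of the per-problem scores. The snippet below uses made-up counts purely to show the arithmetic; it does not reproduce any team's actual results.

```python
# Illustrative F1 arithmetic with made-up counts (not any team's real numbers).
def f1(tp, fp, fn):
    """Harmonic mean of precision and recall from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# One hypothetical problem: 40 true positives, 10 false positives, 10 false negatives.
single = f1(40, 10, 10)  # precision = recall = 0.8, so F1 = 0.8

# Ranking metric: mean F1 across the five problem domains (hypothetical scores).
per_problem = [0.9, 0.8, 0.75, 0.7, 0.85]
mean_f1 = sum(per_problem) / len(per_problem)
```

A team that fails on even one domain drags its mean down sharply, which is why a single generic pipeline matters more than a model tuned to one problem.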

Awards Ceremony @ UNSW

The 2023 Berrijam Jam was a resounding success, featuring 18 dynamic teams and the enthusiastic participation of over 70 students. Discover the winners and explore their engaging video presentations at Berrijam 2023 Winners.


Frequently asked questions

  • What will the teams solve?
    Traditionally, building image classifiers requires vast amounts of data, powerful computing resources, and complex engineering. But what if we could build highly accurate models with minimal data? We could make image AI accessible to many more domains and organisations. For Jam 2024, teams will develop an algorithm and an automated process to train a domain-specific image classifier using only 5 positive and 5 negative images. By lowering the data requirements for vision problems, we make it easier for people to apply AI to many different vision problems. The methodology should be adaptable across different narrowly scoped problems simply by changing the provided training data. For example: identifying ripe fruit, detecting faulty circuit boards, spotting cracks in pipelines, distinguishing between weeds and crops, and so on. Think creatively! There are multiple approaches, and we encourage you to explore pre-trained models, latent feature vectors, clustering, and even synthetic data generation. Can you use pre-trained vision models and adapt them for each problem? What if you expand the pool of training data using synthetic data generation? Would clustering in latent feature space help distinguish between positive and negative samples? In addition to solving the technical problem, teams will also make a short 2-minute video presentation of their work to demonstrate their storytelling skills.
  • Who can Jam?
    Jam 2024 is open to all UNSW students. If you are interested in bringing Berrijam Jams to your school or university, please send us an email at
  • How do I register to participate?
    Agree to the Terms and Conditions by filling out the form here. All members of the team must agree to this independently. Form teams of 4-5 students and submit a 1-Page Proposal by Friday, 5 pm AEST, 15 March 2024. If you are enrolled in COMP9417, form your team and submit a 1-Page Proposal via the internal Moodle system to register the team with Dr Gelareh Mohammadi; special consideration for individuals or smaller teams may be requested and approved case-by-case by Dr Gelareh Mohammadi. If you are NOT enrolled in COMP9417, you can still participate by filling out the following form - The 1-Page Proposal should include the following: Approach - briefly describe your approach and the techniques you want to explore. Owners and Collaborators - nominate the team member who will work on each part of the project. 4-week Plan - a list of weekly milestones leading to the final project deliverable. Your actual project may deviate from the proposal; changes from the original plan will NOT impact team scores. The goal of the proposal is to help teams align and get feedback from the Berrijam team early in the process, if relevant.
  • How many people per team?
    Anywhere between 4 and 5, inclusive. So effectively, 4 or 5. If you need special consideration for individuals or smaller teams, send an email to
  • Are there any group discount rates?
    Yes, we do have group discount rates - Contact us at for more information.
  • Do you run AI Readiness workshops for companies?
    Yes, we do. Send us an email at to discuss and save on a company bundle.
  • Do I need to bring my laptop to the workshop?
    Laptops are not required. We will provide any material you need for your workshop.
  • Who is this workshop NOT for?
    AIR workshops are NOT designed for technical audiences. If you are looking to code, or to understand which model or algorithm to use, this workshop is not for you. We expect no technical knowledge from participants. This workshop is not suitable for researchers and software developers who are interested in the technology rather than the business.
  • Who is this workshop for?
    AIR workshops are designed for business professionals and leaders who like to think strategically to make things better, faster or smarter. We expect participants to understand their business' needs and priorities, and to be strong communicators and collaborators with the ability to influence organisational strategy. Participants are typically executives, senior leaders, managers, entrepreneurs, program managers or project managers.
  • Is lunch provided?
    Lunch will be provided as part of the workshop.
  • Are there any prerequisites to the workshop?
    No, there are no prerequisites to the workshop. Just wear your inquisitive hat and explore how you can apply AI into your workplace and make your life easier.
  • How is a Corporate AI Readiness Workshop different?
    Designed specifically for each organisation, a Corporate AI Readiness workshop not only assesses your company's unique AI readiness, it also strategically enhances your team's ability to harness AI effectively. Through hands-on sessions and expert guidance, we'll cover essential aspects from technological infrastructure to strategic AI deployment, ensuring your organisation is not just AI-aware but AI-ready. Send us an email at to schedule your workshop and ask us about our group discounts.
  • What happens if there is a tie?
    In addition to the F1 score, submissions will be evaluated on: Storytelling - presentation and storytelling are vital for machine learning experts in a professional setting. Novelty - creative application or novelty of approach. Technical execution - a sound and structured approach to modelling and validating the results. Validation of submission - ease of re-creating the findings, quality of documentation, and quality of code.
  • What are the prizes?
    The top team will be awarded a cash prize of AUD 4000 to be shared amongst the team members. There are also two cash prizes of AUD 500 each for the two runner-up teams.
  • I don't have permission to access or upload to the Google Drive folder?
    Submissions are made via Google Drive folders that are specific to each team, with permissions assigned based on the student email provided at the time of team registration. This is typically the email ending in or In previous years, some students faced access issues because they were trying to access the Google Drive from a non-student email, such as their personal Gmail account; as a result, they did not have permission to access the folders. Please make sure you use the correct email address to set up a Google Account. The instructions on how to do that are available here: Create a Google Account. Update: steps to set up the correct account: 1. Log out of your existing personal Gmail/Google account; see instructions from Google to Log out of Google. 2. Create a Google Account using your student email - where XXXXXX represents your student ID - following the steps under "Use an existing email address" in Google's Creating a Google Account guide. 3. Sign in with your newly created Google Account and ensure you are logged in as 4. You will now be able to access the folder for your team, where you have already been assigned access. If you request access using a personal email, or one that was not used to register the team, the request will be denied.
  • Do I submit one or three versions of code - one for each problem?
    Only one version of code, which should work for all five (5) problems - the three (3) that you have sample data for, and two (2) additional ones that you will not see in advance but that will be similar to the three known problem types. The intent of the challenge is to come up with a single algorithm (i.e. a single set of code) that can work across multiple types of problems.
  • How is a team submission judged?
    Winners are selected based on their solutions' aggregate performance on evaluation datasets using the F1 score, adjusted by overall submission quality across documentation and presentation. For example, teams with poor documentation may be penalised, and teams with better presentations may be boosted. Each team will submit Python code, models and libraries to demonstrate their solution works for any domain. Berrijam's evaluation process will call the team's and scripts (template to be provided by Berrijam) for five (5) or more domain-specific problems. For each problem, the evaluation process will use the submitted code to: train a problem-specific model using 10 training examples (five positive and five negative), then generate predictions for up to 50 evaluation images for that problem and measure the performance using the F1 score. In addition to the F1 score, submissions will be evaluated on: Storytelling - presentation and storytelling are vital for machine learning experts in a professional setting. Novelty - creative application or novelty of approach. Technical execution - a sound and structured approach to modelling and validating the results. Validation of submission - ease of re-creating the findings, quality of documentation, and quality of code.
  • What goes in a submission?
    Submit your final code and documentation by Friday 5 pm AEST, 19 April 2024. Each team will make a submission comprising: Code - based on the template files provided by Berrijam, including: a bash script to install all libraries and packages on an Ubuntu flavour of Linux (i.e. apt, bash, pip, etc.); a single script to train a domain-specific model, adhering to the command-line template that Berrijam provides (may call custom Python packages installed using ); a single script to generate predictions from a model trained using their and prediction data (may call custom Python packages installed using ). Models - any pre-trained model(s) or resources required for the solution to work, along with a copy of the licence of those resources. A 2-minute video presentation of your team's solution, plus presentation slides, if any. Instructions, if not already part of the install script. Submit your final report by Monday 5 pm AEST, 22 April 2024. Each team must also submit a final report on their solution, algorithms, evaluation and references. The report may not exceed ten (10) pages.
  • What happens to videos and presentations after the competition?
    Berrijam reserves the right to showcase the presentations of the top three finalists on our website and LinkedIn page, serving as a learning resource for other participants and promoting future competitions. It also provides the winners with an opportunity to highlight their achievements on their personal websites, resumes, or LinkedIn profiles.
  • How do I make a submission?
    Details of the submission will be provided to registered teams by Friday, 29 March 2024. Briefly, each team will make a single submission via a shared folder accessible only by the team. Final code submissions must be made by Friday 5 pm AEST, 19 April 2024. Final report must be submitted by Monday 5 pm AEST, 22 April 2024.
  • What are the key dates?
    Friday, 5 pm AEST, 15 March 2024 - Team Registration and 1-page proposal Submission Monday, 25 March 2024 - Berrijam releases sample data Friday, 29 March 2024 - Berrijam provides registered teams with code templates and link to make submission Friday, 5 pm AEST, 19 April 2024 - Final Code, Video and Presentation due Monday, 5 pm AEST, 22 April 2024 - Final Report due Friday, 26 April 2024 - Winners Announced. Friday, 3 May 2024 - Awards and Presentation (TBC)
  • How many pipelines and models do we create? Do we reuse models across problems from different domains?
    A good analogy is to see the pipeline as a recipe for a cake. If you use vanilla flavour, you make a vanilla cake; if you use chocolate flavour, you make a chocolate cake. Similarly, the algorithms you submit need to be able to train problem-specific models simply by changing the training data. So running the pipeline on the dataset for “Is Epic Intro” should produce a model specifically trained to predict “Is Epic Intro”. We do not expect that model to work for any other problem. Given that your solution will be evaluated on the 3 problems for which you have sample data and 2 unseen problems, the algorithm needs to be generic and adaptable. During the evaluation, we will run your pipeline code on each of the 5 problem datasets separately. Each time, given the dataset, it will train a problem-specific model. Then we use that problem-specific model to generate predictions on images for that problem that were not used in training. If your pipeline includes any pre-trained model(s), they will be used each time your code is run to train and predict.
  • In reviewing the dataset for the 'Need Respray' problem, we observed that there are 4 instances labeled 'Yes' and 6 instances labeled 'No'. This distribution does not match the expected setup of 5 positive and 5 negative examples. Is there any mistake?
    Yes, we accidentally gave you extra 'No' samples. Our bad. To make life simpler, we have added additional examples and made 6 positive and 6 negative examples available in 'Data - Needs Respray -' from the dataset location.
  • Can I use data from an expired or inactive competition?
    As long as the data license allows for commercial use, you can use data from past competitions or competitions no longer active. Just check out the other FAQ around data and license requirements to make that assessment.
  • For the ‘GenAI‘ problem, there are a pair of real vs generated images for each location. Will the images be evaluated individually or in pairs?
    Each image will be evaluated individually, not in pairs. The pairs are provided as a reference for you to see if you can identify any hints or differences that might help your approach.
  • What data licenses are allowed?
    An important part of a data scientist's job is to ensure the data they use adheres to the licences and rules of the organisation producing the data. You should ensure that any data sources are available for commercial use, as per the rules of the competition. Common licence types such as Creative Commons or Open Data Commons have options that allow the data to be used for commercial purposes, sometimes with additional conditions such as attribution or providing the same licence to others. It's important to check each data source for the licence or terms it falls under, and to look for additional conditions. Just because a source is described as an open data source does not give you commercial usage rights. For example, both UNICEF and the WHO do not allow commercial use of their data without permission (see their terms here and here). Private data collections also sometimes restrict commercial use, such as on Glassdoor. On the other hand, you can use the data from sources such as, or If there is no licence, then you won't be able to use the data source. As a general guide, avoid licences marked as 'NC' or non-commercial; all other Creative Commons or Open Data licences generally allow commercial use. A useful description of common data licences can be found here: Remember that each submission that uses external data must include the data and its data licence. It is your job to ensure the data used meets the competition requirements.
  • Can I use <some_specific_name> kaggle dataset?
    If you are using a dataset from Kaggle, you must read the rules or conditions around the use of the dataset to ensure they allow commercial use. You can do that yourself by reading the rules of that Kaggle competition, including the data sections. Here are two examples where commercial use of the dataset is not allowed, and so the dataset is not suitable for Berrijam Jams. Example #1: A. Data Access and Use. Unless otherwise restricted under the Competition Specific Rules above, after your acceptance of these Rules, you may access and use the Competition Data for the purposes of the Competition, participation on Kaggle Website forums, academic research and education, and other non-commercial purposes. Example #2: A. Data Access and Use. Unless otherwise restricted under the Competition Specific Rules above, you may access and use the Competition Data for non-commercial purposes only, including for participating in the Competition and on forums, and for academic research and education. As you can see, in both cases the dataset cannot be used commercially, and so is not suitable for Berrijam Jam. You can typically find the rules associated with Kaggle datasets/competitions on the 'Rules' tab. Remember to look beyond Kaggle. Here is a list of public datasets that you might find useful. Again, always check the rules of the dataset to confirm they allow commercial use.
  • Can I use GPUs?
    Teams can use CPU or GPU. Solutions are expected to run on an Ubuntu machine with a GPU with 8GB RAM, Nvidia driver 535.154.05 and CUDA version 12.2. The full re-training and prediction process for each problem should not exceed 10 minutes, so that evaluation across five problems finishes within 60 minutes.
  • Can I use my own data?
    Yes, as long as you meet the following conditions: you include the data in the submission, along with the licence of the dataset, and the dataset has a commercially friendly licence.
  • Can I use a Pre-Trained Vision Model?
    Yes, as long as they meet the following conditions: The pre-trained models must be submitted along with the code and run locally; your code must not call an external REST API to access a hosted model. Pre-trained models must have a commercially friendly licence (e.g. MIT, Apache, BSD); AI models that are exclusively for academic or research purposes are not allowed. A submission that does not meet these conditions will be disqualified.
  • Will Berrijam provide data?
    Berrijam will provide sample datasets for three (3) problem domains, with 10 examples each - 5 positive and 5 negative. These problem domains will be represented in the evaluation process, but with different examples from the ones that are shared. Note that the evaluation process will include two (2) additional problem domains that will not be shared. You can assume images in the evaluation dataset will have similar dimensions and resolutions.
  • Help! I have a question. How do I contact Berrijam Jam?
    If you have questions, here is what to do. First, review the FAQ here and see if we have already answered your question; we might just respond with the same answer and you'll end up waiting unnecessarily. Otherwise, email - note that you might not get an answer for 24 hours. We'll do our best, but there might be lots of other questions and emails; each time we answer a question that others are also asking, we'll update the FAQ. You can also ask Prof. Mohammadi and her team for help with anything technical. Keep in mind that we won't tell you how to solve a particular machine-learning problem, or provide a solution that might give one team an advantage over another. It's a friendly competition after all :)
  • What is the Code of Conduct?
    All participants are expected to abide by the following Code of Conduct.
  • How and where do I attend AI Talks?
    Accessibility matters, therefore you can attend AI Talks: In person at CBRIN Offices - Level 5, 1 Moore Street, Canberra, ACT or Online via video conferencing -
  • Is the event free? Where can I register?
    Although the event is free, we do require people to register so that we can plan the venue and hosting. Please register here
  • What's the date and time for the next AI Talks?
    AI Talks is held every 2nd Thursday of the month, 12:30 pm - 1:30 pm Australian Eastern Standard Time.
  • Where can I find parking if I choose to attend at the CBRIN Offices?
    For those attending in person, parking is available at the City West Carpark on Allsop Street. There is also limited street parking around the building.