Captain’s log – October 2018

Welcome, everyone, to a new Captain’s log. This one covers all the Planar Bridge meetings from August and September. Once again, there were many meetings but not many recurring topics. This time, however, we’ll make it worth your time with updates on some action items. So let’s get straight to it.

L2 Policy Knowledge

Our main topic today revolves around the policy knowledge expected of any given L2. This discussion came from analyzing the requirement at every GP that every HCE infraction and every backup be confirmed by an L3. When the “Check for HCE” policy was implemented, it was advertised as “HCE is new and confusing; please check.” Now that the policy is older and more refined, that logic no longer seems to apply. So, why does this policy still exist? There seem to be three main reasons:

  • The policy exists to help with potential investigations, not because of distrust that FJs would get the ruling wrong
  • The policy also helps with properly handling potentially contentious customer service issues
  • And, yes, there is a large gap between newer PPTQ judges and GP judges, given the breadth of L2 as it currently exists, and this provides an excellent opportunity for mentoring

Regarding backups, the idea is that in most cases judges know how to rewind, but not necessarily when to do it. By filtering all backups we avoid a significant number of appeals, and we also create mentorship opportunities. As a reminder, L3s are also asked to check these rulings with other L3s. So it’s not really a “don’t trust” situation, but a “we have the resources to verify it, so let’s use them” one.

The main counterargument was that this discrepancy between stated expectations (as described in the level definitions) and “real expectations” creates even more problems. The same judges are trusted to issue these rulings at PPTQs, but not at GPs. So judges don’t know for sure whether they’re expected to have this kind of knowledge or not. Many have said the bar is set too low. Do you think the bar is set too low for the average judge?

There’s an argument that this discrepancy can also lead to lax policy knowledge, as L2 judges who are no longer the final authority on these rulings don’t worry about the details as much. Many L2s have stated that they feel this knowledge is neither needed nor tested. This goes directly against the original idea of the verification: that it creates a mentoring opportunity.

Having established that this discrepancy exists, the next step was to ask ourselves: how do we improve policy knowledge? Are the education tools sufficient? What tools are there? This should be an ongoing discussion, one that leads us to analyze the current material both for L2 candidates and for L2s who want to improve.

Planar Bridge contacted the PCs about this issue, and they came back to us recognizing that this discrepancy exists and that they’re aware of it. However, this is one of those cases where they feel the need to choose the lesser evil. GPs are a training opportunity for all judges, and eliminating the opportunities this check provides would severely damage the quality of the judge program. On the other hand, requiring a policy expert (knowledge and philosophy) for PPTQs is not possible because there are many more PPTQs than policy experts. Based on this, there are only two ways to fix the discrepancy: either raising the requirements to HJ a PPTQ to the point where PPTQs couldn’t be run, or wasting one of the best mentoring tools we have in the judge program. Therefore, they believe it is better to coexist with this discrepancy than to implement any other solution. It’s also worth pointing out that the judge program is likely facing significant change in the next several months as we adapt to the new organized play structure.


Level Maintenance

There were many discussions about level maintenance, as two and a half years after the introduction of NNWO, the maintenance process is still not uniformly implemented. For example, some regions have a maintenance process in place, while others don’t. Maintenance tests for L2s vary from region to region, ranging anywhere from a full L2P to a test on policy updates. While maintenance is handled by each RC in their region, it was widely accepted that the process should be much more uniform across regions, as the current situation creates a feeling of unfairness.

It’s important to note that we’re not talking here about special cases where extenuating circumstances allow individuals to have some requirements waived.

Right now, maintenance depends mainly on how each RC chooses to evaluate it, based on their available time and how they believe each requirement can be checked. To alleviate some of these issues, it was suggested that RCs could enlist L2s and L3s who are willing to help with the verification process. Another tool that could help is to make the maintenance test similar to the way it currently works for L3s, with no annual test. Instead, with every set release, judges are asked to complete a 10-question rules update, with two chances to pass it. If they fail both attempts on any of them, they’re required to pass an L3P.
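To make that proposed cadence concrete, here’s a minimal sketch of the flow described above; the function, the data shape, and the passing threshold are assumptions for illustration, not part of any actual process.

```python
# Hypothetical sketch of the per-set-release maintenance flow: every set
# release brings a 10-question rules update with two chances to pass it;
# failing both attempts on any update triggers the L3P requirement.
# The passing threshold of 8/10 is an assumed value, not official.

PASS_THRESHOLD = 8  # assumed passing score out of 10


def maintenance_outcome(attempts_per_update):
    """attempts_per_update: one list of attempt scores (out of 10) per
    set release, each holding up to two attempts.
    Returns "in good standing" or "L3P required"."""
    for attempts in attempts_per_update:
        # A judge passes an update if either of their two chances clears
        # the threshold; failing both on any single update is decisive.
        if not any(score >= PASS_THRESHOLD for score in attempts[:2]):
            return "L3P required"
    return "in good standing"
```

For example, a judge who scores 9 on one update and then 6 followed by 8 on the next stays in good standing, while scoring 6 and 5 on a single update would trigger the L3P requirement.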

Another source of variation lies in the way the “review” requirement is evaluated. Some regions check the quality of the reviews, while others don’t. This means that in some cases this is a quantitative requirement, and in others it becomes a qualitative one. There seems to be a consensus that review quality should be checked to some degree, but there was no clear solution on how to assess it.

Again, we contacted the PCs about these issues, and they agreed that maintenance must be the same across the entire program. There should be flexibility to adapt to circumstances (like remote areas or other extraordinary situations). However, those exceptions can’t apply to every judge from certain regions. They told us they’re going to look into this together with the RCs.

When asked about the L3 maintenance process and the reasons why the current system remains largely unknown to L1s and L2s (for example, the official blog hasn’t been updated), they said that they still think of the current process as a pilot, one that’s going to undergo more changes in January, at which point the public information will be updated.


CFBE

As usual, there were many topics discussed with CFBE, and here’s a summary of the most relevant ones.

They have noted a decrease in attendance numbers for the main event, but side event attendance has generally been on the rise, even though it varies a lot.

There was some discussion about compensation, especially about extending the lead rate to more positions, for example PTQ HJs and TLs. They’re aware of this as a possible change in the future and are currently considering it. Some asked how there can sometimes be no difference in compensation between an L1 and an L3. They believe L1s and L3s bring different things to the tournament, but all of them are worth something; for example, L1s can bring lots of new energy, and there’s a lot of value in that. Many L3s who don’t share this approach often apply only for leadership positions through Core.

Some people were concerned about L1s being staffed only for Operations tasks, which leaves fewer opportunities for them to learn. The counterargument was that, usually, L1s are the only ones willing to cover such roles. One option presented was to put these L1s in a judging role on side events for at least one of their days, as a way to help them gain more experience.


  • When discussing feedback and reviews, at one meeting someone asked: “Did anyone watch someone do something that could have triggered a feedback opportunity?” Despite most people answering yes, most of the judges recognized that they didn’t actually deliver that feedback, for many different reasons. So, once again, we call upon everyone to seize those opportunities: take a step forward and start a conversation about that interaction. Remember, you don’t need to have a precise idea of how a judge can improve, as imperfect feedback does not equal useless feedback. Also, positive feedback can be as useful as critical feedback, and both are really good at sparking a helpful discussion.
  • Regarding counterfeits, it was discussed that we can’t publish information about how we discover them, as that would give cheating players too much information about what we look for. A possible idea is to develop a counterfeits workshop that judges could run at GPs or conferences.
  • There’s consensus that the mission statement of the Judge Program is good, and that it applies to more than just the GP judge. But we should publicize it more, perhaps by making it the welcome message on JudgeApps. Nicolette Apraez is going to contact other RCs about including it in the different welcome wagons.

Action items

In talks with the JudgeApps development team, we identified four different things that could be improved:

  1. New users aren’t subscribed to official announcements
  2. New users aren’t subscribed to new forums they gain access to by default (regional forums, TLC/TLTP, L3)
  3. Existing users aren’t subscribed to new forums they gain access to by default
  4. Existing users aren’t subscribed to official announcements

#1 and #2 are already fixed by a slight change in the registration process. Now, every new user defaults to emails on for official and unofficial announcements, and emails off for the other forums. They are also given a list of every forum and its default settings, with the option to change them. There’s a new setting for “new generic forum post” that defaults to “email and on-site notifications,” so that when they gain access to things like regional forums or TLC/TLTP, those notifications default to on. This last change will also apply to existing users, fixing #3 for future forums they gain access to, as many judges said that sometimes they didn’t even know there were new forums they had gained access to (the TLC forum being one example). #4 is still under review: there’s no agreement on whether changing the setting for 22,000 existing accounts is desirable. What do you think?
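For illustration only, the new defaults described above can be sketched roughly as follows; the forum names, the function shape, and the idea of modeling this as a single lookup are assumptions, not the actual JudgeApps implementation.

```python
# Hypothetical model of the registration/notification defaults described
# above. Forum names and structure are assumptions for illustration.

ANNOUNCEMENTS = {"official announcements", "unofficial announcements"}


def default_email_setting(forum, gained_after_registration=False):
    """Return whether email notifications default to ON for `forum`."""
    if forum in ANNOUNCEMENTS:
        return True  # fix #1: new users get announcement emails by default
    if gained_after_registration:
        # fixes #2/#3: the new "new generic forum post" setting defaults
        # later-gained forums (regional, TLC/TLTP) to notifications on
        return True
    return False  # other forums at registration default to emails off
```

Under this sketch, announcements and newly gained forums default to on, while everything else a user already has at registration defaults to off, which matches the behavior described for fixes #1 through #3.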

Also, we need help. Specifically, we need judges willing to take responsibility for the following action items. If you want to help, please contact us and we’ll talk about how to move forward.

  1. Looking into ways to improve education tools for policy knowledge
  2. Evaluation of current maintenance requirements – are they in a good place?
  3. Determine methods to confirm quality of reviews for L2 maintenance requirement

You might notice that some of the previous paragraphs contain open questions, and that’s on purpose. I invite everyone to give their views in the comments section. Planar Bridge doesn’t end at GP meetings. We want to know the broader audience’s opinion on these topics, so we appreciate every single suggestion.
