Stop being a “Feature Factory”: you can’t afford it! Shift to Product-Led SaaS with an easy-to-use Feature ROI check

The startup scene in 2022 and 2023 has been tough. This “Startup Winter” has seen less funding, many layoffs, and startups closing down. The hopeful times of 2020 and 2021, driven by big investments, feel far away now. Add to this the global events and banking decisions, and we’re in a tricky spot.

I’ve helped startups for 10 years, and one thing is clear: now isn’t the time to just keep making new features. Look at fintech and other SaaS areas. A startup that keeps adding features for every small need is very different from one that focuses on the big needs and does them really well. In these tough times, guess which one stands strong (spoiler: neither, but the second is more sustainable)?

The Problem with Just Making More Features

Recently, many SaaS businesses, including fintech, became feature factories. They kept churning out new features, but many of these weren’t even used, or took too long to validate. The idea was: “more features = more value.” But smart tech leaders know this isn’t always true.

Tech leaders often fall into the trap of only making engineering work better and deliver faster. But it’s a problem if you improve just one department and miss the big picture (what Systems Thinking calls Local Optimisation). It’s like giving a car a new paint job while never changing the engine oil: it might look good for a bit, but it won’t last.

Using Product Leadership to Guide Engineering

Being smart with money doesn’t mean cutting corners or outsourcing all the time. It means using your resources, like engineering, in the best way. This means focusing on big product goals, not just single features. Aligning distributed teams around unified goals is especially important: when collaboration spans the globe, context and end value are easily lost in communication (cultural, async, indirect).

I’ve worked with tech leadership over the last couple of years to move in this direction. We’ve come up with simple tools to check how features are doing and whether they’re worth the money spent. Here are the two main ones:

1. Feature ROI Check: This tool helps you see how your features are doing out in the real world. For each feature, you can see:

  • How much you spent to make it.
  • When it was released.
  • How customers interact with it.
  • How many new customers it brings in.
  • Whether it helps sell more.
  • Whether it impacts your yearly income.
  • How many users are using it, and how.

Imagine you’re deliberating on introducing a new feature to provide Automatic Fraud Evaluation for new Contracts. Sure, it sounds fantastic on paper. However, what if, after the feature’s deployment, the adoption rate is significantly lower than the projected number? Wouldn’t it be prudent to have these insights sooner, to pivot or optimize accordingly?

With this tool and information, you can see which features are really helping your business. The simplicity of this approach sidesteps complexities such as ARR projections, which usually take a year or more to gain traction (especially with enterprise clients). However, nothing prevents you from tracking ROI-so-far, quarter on quarter.
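To make this concrete, here’s a minimal sketch of such a quarter-on-quarter check. It’s my own illustration of the approach, not the actual calculator, and every name and figure in it is invented.

```python
# Minimal sketch of a Feature ROI check: compare what a feature cost against the
# revenue attributed to it so far, quarter on quarter. All figures are invented.
features = [
    {"name": "Automatic Fraud Evaluation", "cost": 120_000,
     "quarterly_revenue": [0, 15_000, 42_000],  # attributed revenue per quarter
     "weekly_active_users": 310},
    {"name": "One-click CSV Export", "cost": 18_000,
     "quarterly_revenue": [9_000, 11_000, 12_000],
     "weekly_active_users": 2_450},
]

for f in features:
    revenue_so_far = sum(f["quarterly_revenue"])
    roi = (revenue_so_far - f["cost"]) / f["cost"] * 100
    print(f"{f['name']}: ROI so far {roi:+.0f}% "
          f"({f['weekly_active_users']} weekly active users)")
```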

2. Engineering Cost Check: This calculator provides a detailed breakdown of the investment behind each feature. It evaluates:

  • The upfront technical investment: development hours, resources used, and any third-party tools or services acquired specifically for the feature.
  • Time: the feature’s release date and the total duration from inception to launch.
  • Resource allocation: both human and technological resources.
  • Effort: dev-days, plus any additional support and enablement needed to bring the feature to fruition.

I’m using an industry-standard ratio of QAs, DevOps engineers, and managers per developer. Your company may have it different, so feel free to change it!

The logic of the calculator is to combine these factors into an overview of the total cost to develop a feature. This total is then used to evaluate ROI by comparing it against the feature’s revenue generation or other success metrics.
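Here’s a hedged sketch of how such a calculation might combine dev-days, supporting-role ratios, and third-party spend. The day rates and ratios are placeholders; as noted above, adjust them to your company.

```python
# Hedged sketch of an Engineering Cost check: dev-days, plus supporting roles at
# a configurable ratio, plus third-party tooling. Day rates and ratios below are
# placeholders -- adjust them to your company, as the post suggests.
DAY_RATES = {"dev": 600, "qa": 450, "devops": 650, "manager": 700}  # USD per day
SUPPORT_RATIO = {"qa": 0.5, "devops": 0.25, "manager": 0.2}  # per developer-day

def feature_cost(dev_days: float, third_party: float = 0.0) -> float:
    """Total investment: developer time, proportional support roles, tooling."""
    cost = dev_days * DAY_RATES["dev"]
    for role, ratio in SUPPORT_RATIO.items():
        cost += dev_days * ratio * DAY_RATES[role]
    return cost + third_party

# e.g. a fraud-evaluation feature: 90 dev-days plus a $6k external scoring service
print(f"total investment: ${feature_cost(dev_days=90, third_party=6_000):,.0f}")
```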

Click here to navigate to the Simple ROI Calculator and the Engineering Cost Calculator. Each field comes with a note and an explanation to help you. This gives you solid data for talking to your team and making smart choices. For example, if a feature that makes online payments easier sees low interest after some time, you need to ask why.

Moving Forward

Becoming product-led is tough. Your product and sales teams might not like the changes and may not be aligned with the Product Vision. Top-level leaders might have different ideas and steer the wheel (or the OKRs) in different directions. But it’s important to think beyond just making more features. For startups, especially in changing areas like fintech, every decision must make sense. Every feature should have a clear purpose, and all decisions should be part of a bigger plan.

As we hope for better days after the Startup Winter, let’s be smart: not just by saving pennies, but by making smart choices with what we have and leading with solid products, backed by real data and a clear goal.

Any resemblance to real company names is coincidental.

#okr #featurefactory #productops #mbr #qbr #startups #fintech #saas #venture #vc #winter #budgeting #roi #engineeringcost

5 essential NOC Metrics to reach high uptime and detect potential outages

My latest tenure of 2.5 years was closely tied to designing and adopting an Incident Management framework (as part of the Program Management org). The effort was driven by two primary objectives:

  • Reach and maintain system uptime of 99.99% (our APIs and SDKs).
  • Ensure engineering is always the firsthand source of information for any potential outage that could result in downtime.

In our foundational days, we lacked a comprehensive alerting and monitoring system. Establishing the Network Operations Center (NOC) Team was our strategic move to shape a robust system and take charge of Incident Management. We not only touched the 99.98% uptime benchmark but also heightened our proactivity from spotting 60% of incidents ahead of our merchants to a resounding 95% and higher.

This post covers the core set of metrics for a Network Operations Center team and how they relate to the Incident Management process, together with common antipatterns and measures for improvement.

Metrics that Steered Our Success and Measures to Improve Them

1. First Time to Respond

  • Context: Rapid response times can make or break product reliability.
  • Industry Standard: 10-15 minutes.
  • Our Vector: An ambitious SLA of 1 minute.
  • Antipatterns: Over-optimizing can stretch the NOC team thin. Sometimes it’s wiser to slightly breach the SLA and strategize better future responses.
  • Impact: A delayed response can seriously impair the product’s dependability.
  • Measures for Improvement: Regularly refining our alert sources; the optimal range is 3-5 sources. This involves identifying system bottlenecks, monitoring the typical patterns around them, and continuously refining our alerting mechanism to reduce false positives and consolidate dashboards. (A small measurement sketch follows this list.)
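As an illustration (not a tool we actually ran), here’s a minimal sketch of checking first-response times against a 1-minute SLA; the alert records are invented and would in practice come from your alerting tool’s export.

```python
# Minimal sketch: measuring Time to First Response against a 1-minute SLA.
from datetime import datetime, timedelta

SLA = timedelta(minutes=1)

# Invented records; in practice, exported from your alerting tool.
alerts = [
    {"fired": "2023-05-01T10:00:00", "responded": "2023-05-01T10:00:45"},
    {"fired": "2023-05-01T12:30:00", "responded": "2023-05-01T12:31:20"},
    {"fired": "2023-05-02T01:15:00", "responded": "2023-05-02T01:15:30"},
]

durations = [
    datetime.fromisoformat(a["responded"]) - datetime.fromisoformat(a["fired"])
    for a in alerts
]
breaches = sum(d > SLA for d in durations)
print(f"worst response: {max(durations)}, SLA breaches: {breaches}/{len(durations)}")
```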

2. Time to Acknowledge

  • Context: The initial acknowledgment sets the path for problem assessment and repair strategies.
  • Industry Standard: 10-15 minutes.
  • Our Vector: A 3-minute SLA.
  • Impact: The acknowledgment speed directly correlates with user trust.
  • Measures for Improvement: Similar to the first metric, we focused on refining our sources of alerts, ensuring that the NOC team isn’t overwhelmed with too many data points.

3. Time to Assemble

  • Context: Quick and appropriate team assembly means faster problem-solving.
  • Industry Standard: 30-45 minutes.
  • Our Vector: A 15-minute SLA.
  • Antipatterns: Summoning any team, rather than the right one, can be detrimental.
  • Impact: Swift and relevant team assembly leads to efficient problem resolution.
  • Measures for Improvement: Establish clear escalation paths and alert tags. Automation, using tools like PagerDuty with Jira, is essential once alerts have clear ownership and false positives are minimized. Regular training and drills keep the team prepared, and involving the team in decision-making also brings a fresh perspective on the framework.

4. Proactive Engineering Detection Rate

  • Context: Understanding issues even before they manifest as incidents ensures a platform’s reliability.
  • Our Metric: The percentage of potential issues engineering identified before they became incidents, versus those reported externally.
  • Patterns & Impact: A low percentage (<80% for downtime-related incidents) indicates a reactive approach. High proactiveness, as evidenced by our journey, assured platform reliability.
  • Measures for Improvement: Fine-tuning alerting and monitoring, and maintaining transparency and feedback loops with customer-facing teams. (A toy calculation of the rate is below.)
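With invented numbers, the rate is just:

```python
# Proactive Engineering Detection Rate: the share of incidents engineering
# caught before an external (merchant) report. Numbers are invented.
detected_internally = 57  # incidents flagged first by engineering / NOC
reported_externally = 3   # incidents first reported by merchants or support

rate = detected_internally / (detected_internally + reported_externally) * 100
print(f"proactive detection rate: {rate:.1f}%")  # target 95%+, alarm below 80%
```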

5. Number of Critical False Positives

  • Context: False positives can drain the productivity and morale of the NOC team. They detract from real issues and can potentially desensitize the team to genuine threats.
  • Our Metric: At the outset, we grappled with an astounding 40% of critical alerts being false positives. Our relentless push brought this down to a mere 5%.
  • Antipatterns: Over-alerting can spread the NOC team too thin, with a risk of missing a genuine alert amid the noise.
  • Impact: Lowering the false-positive rate paves the way for scalable, effective automation, while a high rate can impede automation and compromise the quality of incident responses. Alert fatigue can cost fintech platforms like ours (think Stripe, Plaid, or Square) dearly in both platform reliability and team morale. False positives in alerting might seem innocuous, but they slowly erode the efficiency of your response mechanism. A disciplined, data-driven approach, much like the one we practiced, can turn this around. It’s not just about the quantity of alerts but the quality: every alert should be actionable, relevant, and steer the platform away from potential disruptions.
  • Measures for Improvement: We embraced a rigorous weekly analysis of all alerts and escalations. Each stage of the alert funnel was scrutinized to ensure that every alert served a genuine, preventative purpose against potential incidents. This consistent refinement not only brought down false positives but also sharpened our entire incident management strategy. (A sketch of such a funnel review follows.)
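Concretely, one way to run such a review is to group critical alerts by source and rank sources by false-positive share, so the noisiest sources get tuned first. The alert data and source names below are illustrative.

```python
# Sketch of a weekly alert-funnel review: group critical alerts by source and
# rank sources by false-positive share. Alert data and source names are invented.
from collections import Counter

alerts = [
    {"source": "api-latency", "false_positive": True},
    {"source": "api-latency", "false_positive": False},
    {"source": "db-replication", "false_positive": True},
    {"source": "db-replication", "false_positive": True},
    {"source": "payments-5xx", "false_positive": False},
]

fired = Counter(a["source"] for a in alerts)
noise = Counter(a["source"] for a in alerts if a["false_positive"])

# Noisiest sources first: these are the ones to tune or consolidate.
for source in sorted(fired, key=lambda s: noise[s] / fired[s], reverse=True):
    print(f"{source}: {noise[source]}/{fired[source]} false positives")
```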

Conclusion

Metrics are more than mere numbers; they’re the compass guiding our path to excellence. Within the fintech domain, serving millions of users, these metrics and our proactive steps have been instrumental in delivering a platform that users trust implicitly. Building isn’t enough; it’s about crafting with insight, dedication, and continuous learning.

#okr #incident #framework #noc #networkoperationscenter #uptime #outage #alerting #monitoring #management #programmanagement #devops

Navigating OKR Challenges: Common Pitfalls and Agile Solutions

Over the past 2 years, I’ve been working as a Program Management lead at Metamap.com, helping to set up the OKR framework, among other things. As if working with OKRs weren’t challenging enough on its own, we are a distributed team, spanning from PH / SG to the US west coast and even Hawaii 🙂

The common issues we’ve discovered led me to write this blog post. So without further ado, it’s time to delve into the captivating world of OKRs: Objectives and Key Results. These are the guiding stars that lead your organization to its true north. Embarking on this journey can sometimes make you feel like you’re treading choppy waters. In today’s post, we’ll chart a map of the common pitfalls and equip you with an Agile compass to help you navigate with confidence.

OKRs: A Refresher

Just to ensure we’re all aboard the same ship, let’s quickly recall that OKRs comprise two main elements: Objectives – your ambitious goals, and Key Results – concrete, measurable steps to reach those goals. Sounds simple, but as anyone who’s navigated a maze will tell you, the reality can be trickier.

1. Lack of Understanding and Training: Picture this: A soccer team trying to score a goal, but half the players think they’re playing basketball. The result? Chaos, and certainly no goals scored. An unfamiliarity with OKRs can lead to a similar mismatch in your organization.

Solution:

  • Organize detailed training sessions explaining the OKR framework.
  • Explain the difference between Objectives (qualitative goals) and Key Results (quantitative measures).
  • Bring in (or be) an OKR coach to guide your team, explaining how to formulate effective OKRs.
  • Conduct workshops with hands-on exercises for creating and aligning OKRs.
  • Share examples of successful OKRs from other organizations for reference.

2. Misalignment of OKRs: Imagine a choir where each member sings a different song. The result? A far cry from harmony. Misaligned OKRs can create a similar dissonance in your organization.

Solution:

  • Use Scrum Events or Agile ceremonies like sprint planning and retrospectives to align OKRs.
  • During sprint planning, ensure each team’s OKRs align with the company’s main objectives.
  • In Sprint Reviews, review OKR performance and realign as necessary.
  • Encourage cross-departmental communication to avoid working in silos. Try holding a multi-team Sprint Review where each team measures its contribution toward the shared Key Result!

3. Setting Unrealistic Key Results: We all love superheroes, but expecting to fly like Superman is unrealistic (unless you’re wearing a VR headset!). Similarly, Key Results that aim for the moon without a rocket can leave teams feeling disheartened.

Solution:

  • Use Agile’s principle of incremental progress.
  • Set smaller, achievable Key Results aligned with each sprint goal.
  • Measure progress after each sprint, adjusting targets as necessary (a minimal progress-tracking sketch follows this list).
  • Ensure teams have the necessary resources and support to meet their Key Results.
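As a loose illustration of smaller, measurable Key Results (my sketch, not part of any official OKR tooling), here’s a minimal model that scores KR progress after each sprint; all names and numbers are invented.

```python
# Minimal sketch: Key Results as measurable start/target/current values, so
# progress can be checked after every sprint. All names and numbers are invented.
from dataclasses import dataclass

@dataclass
class KeyResult:
    name: str
    start: float
    target: float
    current: float

    @property
    def progress(self) -> float:
        """Fraction of the way from start to target, clamped to [0, 1]."""
        span = self.target - self.start
        return max(0.0, min(1.0, (self.current - self.start) / span))

objective = "Make onboarding effortless"
key_results = [
    KeyResult("Signup-to-first-action time, minutes", start=30, target=5, current=18),
    KeyResult("Activation rate, %", start=40, target=60, current=49),
]

for kr in key_results:
    print(f"{objective} / {kr.name}: {kr.progress:.0%}")
```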

4. Overcomplicating OKRs: Ever tried to solve a Rubik’s cube while bouncing on a pogo stick? It’s overwhelming! Similarly, complex or excessive OKRs can feel like juggling flaming torches.

Solution:

  • Keep OKRs simple and lean, following the Agile spirit.
  • Limit the number of OKRs for each team to keep focus sharp.
  • Make sure each Key Result is specific, measurable, and time-bound.
  • Review and simplify OKRs regularly to ensure they remain manageable and meaningful.

5. Poor Communication and Transparency: Imagine playing a game of telephone with a 5-meter long tin can string. The message is going to get a little garbled, right? Poor communication can lead to similarly distorted OKRs.

Solution:

  • Leverage Agile communication practices such as daily stand-ups and sprint reviews.
  • Use these platforms to discuss OKR progress and address any issues.
  • Maintain a transparent OKR dashboard where everyone can see each team’s Objectives and Key Results.
  • Ensure leaders actively participate in OKR discussions, providing clarity and encouragement.

There you have it! Remember, every journey encounters a few storms, but with the right Agile compass, your OKR ship can weather any challenge. Keep sailing, and soon your organization will shine like a polished gem!

I’d love to hear about your OKR journey. Have you faced any of these challenges? How did you steer your ship back on course? Share your experiences in the comments section!

Until our next adventure, keep your sails high, and navigate with confidence! Happy OKRing, folks!

My experience preparing for the PSM II (Professional Scrum Master II) certification

Hey everyone, here’s my list of resources and literature for getting prepped for the PSM II examination. I would be tremendously happy if you share yours 🙂

Sidenote: if you’re the “certificates are overvalued” type of person, I’d agree. This is especially true of CSM / PSM I, because those certifications only indicate that you’ve been introduced to the basics. When it comes to PSM II, however, you need to rely on your experience as a Scrum Master. No more “shu” (of shu-ha-ri), just your experience and daily understanding of agile values.

At the time of writing this post, there are 6793 PSM II holders.


Preparation for PSM II

As Denis @ Agile Expat wrote in his blog (in Russian), a great starting point is to pass the scrum.org open assessments at 100% before taking the PSM II exam:

  • PSPO open – because there are questions about how to coach a PO and how to work with value. (I might add that if you can pass PSM II, you can easily pass the PSPO exam.)
  • PSM I – since you MUST know everything in it by heart: values, roles, events, artifacts.
  • PAL-E – so that, through a coaching lens, you understand higher management, metrics, and organizational maturity.
  • PSK open – foundations of working with flow.
  • Nexus open – foundations of the official scrum.org scaling solution, though not the most popular one.

Other tests should be taken with a grain of salt. The internet offers a wide variety of preparation test suites: some are oriented more toward passing PMI-ACP (and are PMBoK-skewed overall); others should be avoided at all costs, as they mutilate the very basic principles of Scrum and sabotage your preparation. Stay aware.

Coaching, books, training

Lyssa Adkins: Coaching Agile Teams – a rather easy-to-read and universal cookbook for the Scrum Master’s stances of agile coach, facilitator, and teacher. It perfectly complements your personal experience.

There’s a small section on conflicts, which I find useful (in case you don’t want to dive deep into the science of conflict). In my experience, the book is greatly enhanced by the ICAgile ICP-ATF (Agile Team Facilitator) and ICP-ACC (Agile Coaching) training courses.


Trust Stories – Growing trust in distributed teams (RU)

Last week, Alex Pikulev of Agilix and I recorded a Trust Stories podcast episode on growing trust in distributed teams, courtesy of the In Teams We Trust website.

The main bullets:
– Growing trust in distributed teams is hard, but nevertheless just as important as in a co-located team.
– The team creates its atmosphere of trust itself. Our task as a Scrum Master, coach, or manager is to help, and to highlight the areas that need attention.
– XP, and especially Pair Programming, helps grow trust.
– Informal intrateam trust-building (Skype beers, visiting colleagues in other locations as a guest, fixing a bike over webcam, buying something at a flea market in your city and sending it to a colleague) helps a lot.
– Give the team more freedom for collaborative work and motivation. More trust! And work harder on conveying the context and value of the features being implemented.

More details (rus) / tg channel : https://t.me/inteamwetrust_rus/35
Video: https://youtu.be/Pq82aFap1rc

How to turn off New Jira Issue view

  1. Navigate to Personal Settings
  2. Turn the Jira Labs switcher off
  3. And leave feedback. The Atlassian team needs your help to understand what you didn’t like. Ahem, here’s mine 🙂
    1. It’s not comfortable to edit
    2. It doesn’t support markup
    3. It doesn’t allow working with resolutions
    4. It makes it hard to find the fields you need, even though they’re already enabled in the standard view

Are you finding the new view comfortable? Has it helped you improve your Jira routines?

Conducting Remote and Distributed Retrospectives with Trello

and why Trello?

Lately I’ve started using Trello as the ultimate tool for retros and demos. This post covers the path that led me to Trello over other solutions.


Tools

I’ve used multiple tools: Realtimeboard (now Miro) as an interactive flipchart for collaborating with the team, Google Docs with sections assigned to the retro stages, Confluence (since 100% of the projects I’ve worked on had the Atlassian stack), and even Jira once (wow, that was a bad idea)!

I bet almost everyone has tried to find that niche, that ultimate tool they can reuse across projects no matter the area!

The main criteria for the proper tool are:

  1. Handling item-centric discussion: cards, ideally.
  2. Fast and effortless collaboration on the cards.
  3. Marking the cards (by all members), whether by label or color.
  4. Reordering cards.
  5. Low total time for the core flow: create a section -> add an item -> have members mark its worth -> add a comment to the item.

Realtimeboard

While Realtimeboard is awesome, it’s not as simple to use for this kind of collaboration (it excels in functionality, but the retro format is tied to cards, which is not the product’s strongest side). There’s no barebone structure that supports cards, so you have to maintain it yourself: create some kind of column, then move items that aren’t self-aligned into it. This is time-consuming and laborious. Labelling isn’t a strong side of Realtimeboard either.

Google Docs

As for Google Docs, it’s the default option for the zillions of companies I’ve discussed remote retros with. However, it’s not visual enough for dissecting and splitting retro items. Spreadsheets, on the other hand, seem to cope better with the 2-dimensional retro approach (well, not an approach so much as the idea that you have buckets of items for good/bad/improvements), but drag-and-drop for reordering and linking items is terrible there. At one point I also tried Google Slides with one slide per section (e.g. all the great improvements accomplished since the last sprint), but it felt too heavyweight and was bad at letting the whole dev team collaborate properly. Labelling here is somewhat OK, but not much more.

Confluence

As for Confluence, although it does have a retrospective blueprint, it’s better suited to documenting / stenographing than to real-time discussion, or to keeping some kind of decision log. Atlassian positions Confluence as a lightweight in-stack collaboration solution, but it’s far behind Google Docs in simplicity, stability, and collaboration. And again, it’s not centered around discussion items. It’s also not stable enough: some changes aren’t applied on publishing, or the connection to the cloud instance gets lost.

Trello

Trello, finally, simply supports cards. It allows voting either via power-ups (simple) or via labelling with colors (fast, efficient, and convenient). You can drag and drop item cards and organize retro stages into columns. If an item invites a lengthy discussion or isn’t that related, you just drop it into the parking lot. Basically, Trello is the most simple-to-use online implementation of a flipchart plus sticky notes. (If you like automating the setup, a small board-creation sketch follows.)
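This is a minimal sketch using the Trello REST API, assuming you’ve generated an API key and token; the board name and column names are just the stages I use, so adjust them to your own format.

```python
# Sketch: create a retro board with the usual columns via the Trello REST API.
# Assumes TRELLO_KEY and TRELLO_TOKEN in the environment (from trello.com/app-key).
import os
import requests

API = "https://api.trello.com/1"
AUTH = {"key": os.environ["TRELLO_KEY"], "token": os.environ["TRELLO_TOKEN"]}

# Create the board without Trello's default lists, so we control the columns.
board = requests.post(f"{API}/boards/",
                      params={**AUTH, "name": "Sprint Retro",
                              "defaultLists": "false"}).json()

stages = ["Improvements since last sprint", "What went well",
          "What could have been better", "What will help us improve",
          "Parking lot"]
for name in stages:
    requests.post(f"{API}/lists",
                  params={**AUTH, "name": name, "idBoard": board["id"],
                          "pos": "bottom"})

print("Board ready:", board["shortUrl"])
```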

Preparation & Setting the Stage

Remote retros are usually much less emotional and empathetic, given that everything happens online and some people may not want to show their faces (and expressions) on camera. To set the stage, we’d ideally need to:

  • Get everyone to turn on their laptop cameras.
  • Select a comfortable tool. I usually use Skype or Slack video, but Zoom occasionally seems to be a great option as well.
  • Make sure the connection quality is superb: we need as few lags as possible.
  • Prepare beforehand, with either a unified agenda or the topics. We can pinpoint any inconsistencies and dysfunctions on an online board in advance 🙂 All retro participants should know the structure the retro will follow, in order to assign the points they’ve prepared to particular retro stages.

Retro

(Example of how a Trello board is used during screen sharing on a retro.)
Things improved since the last sprint

I usually start retros by listing the improvements/accomplishments we planned to achieve last time. We mark an achieved item green (so we keep a long list of all implemented improvements), mark orange anything critical we planned but didn’t achieve, and mark red anything not improved for two sprints in a row. That red-labelled card simply becomes the top priority to improve (if still relevant).

This shows the team where we are with the desired improvements and is a good starting point for an overall recap of what happened during the previous sprint.

Tip: it’s sometimes nice to order everyone a pizza for the retro, to set a positive vibe and give thanks for the accomplishments. It shouldn’t only come out of the org’s budget; the team can self-organize around the retro being a cheerful, friendly event instead of a mandatory meeting. Don’t force it into a “mandatory pizza meeting”, though, with management looking down from above and yelling: “Eat your food and report on the bad things that happened this sprint.” I’ve seen orgs give a pizza budget, then watch that it’s spent properly (eaten) and make sure people are thankful that management is paying for their food 🙂

This stage may get lengthy: if something planned for improvement wasn’t achieved, the team may get into long discussions about why. As a facilitator, your job is to help the team find a productive path to the root cause quickly enough to fit the retro timebox. That’s why only a few items (1-2) should be planned for improvement; otherwise we may get stuck on the very first stage. Your job as a Scrum Master is to coach the team to be aware of the timebox and get to the root cause efficiently.

Sprint Metrics

Sprint metrics are an important internal SLA for the team. There are usually various factors the dev team sees as obstacles or impediments to becoming an even greater power-ranger squad. Facilitating and properly reflecting the dev team’s discussion yields plenty of items for improving the process and measuring those improvements. The rest is just comparison. Common metrics to compare: Lead Time (as soon as you explain its meaning, the team will be motivated to improve it), time in Code Review, the number of times tickets are reopened, and so on.

For bigger projects we also review metrics through to the projected project end, comparing story-point-based finish projections and throughput. Below is a small sketch of computing lead time from ticket timestamps.
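A minimal sketch, with invented field names; map them to whatever your tracker actually exports.

```python
# Minimal sketch: Lead Time per ticket and its median, sprint over sprint.
from datetime import date
from statistics import median

# Invented fields; map "created"/"released" to your tracker's actual export.
tickets = [
    {"created": "2020-03-02", "released": "2020-03-10"},
    {"created": "2020-03-03", "released": "2020-03-06"},
    {"created": "2020-03-05", "released": "2020-03-16"},
]

lead_times = [
    (date.fromisoformat(t["released"]) - date.fromisoformat(t["created"])).days
    for t in tickets
]
print(f"median lead time: {median(lead_times)} days")  # compare across sprints
```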

Sprint Goals

This is not something I use everywhere, but for transparency’s sake we need to reiterate what we tried to achieve goal-wise. Although goal achievement is reviewed and discussed during the Sprint Review rather than the retro, it’s still effective to highlight the reasons sprint goals were or weren’t met (those reasons are process-related, and the retro is about inspecting and tuning the inner processes).

I usually guide teams to mark sprint-goal cards green for achieved and red for not, with comments to explain the reasons. Simple as that 🙂

The typical ‘What has been working well’, ‘What could have been better’, ‘What will help us improve in the future’
(By the way, did you know there’s an ICP-ATF, the Team Facilitator badge by ICAgile?)

Distributed and remote team members should add points and vote on them as soon as issues are found. There’s no need to wait until the retro itself to pin a discussion item.

As a facilitator, your job is to draw the team’s attention to any conflicts or impediments during the sprint, as the team faces them, and to help the team document / pin them to the retro board.

Do this by reflecting the situation back when discussing it with the team, offering the view of someone without the context, or using any other facilitation technique 🙂 Make sure the team is engaged in inspection throughout the cadence itself, not only during the retro event. Even if that produces a lot of retro items, you can always remove the irrelevant ones.

References and helpful things

  • Ben Linders has a pretty great Trello board with crowdsourced retrospective techniques: https://www.benlinders.com/news/trello-board-retrospective-techniques/ That was possibly the best help I got when improving team retros in Trello 🙂 He’s a nice guy in person; you can clarify a lot if you’re at the same conference / workshop as him!

Jira Cloud: Releasing old multi-project tickets from a Kanban board without spamming developers’ inboxes

This is an interesting case I’ve always wanted to handle better: almost every project you join has a lot of older unreleased tickets that are actually already in production. And developers (without proper Jira management) keep using a Kanban board whose Done / Closed column gets ever more crowded (it can hit 400 or 1,000 tickets and become slow and almost pointless to use). Typical story, huh?

So what do you do if you want to release all those older tickets without bothering developers with, say, 450 update emails saying a fixVersion has been set on each one? The answer (thanks to the AUG Moscow community) is to swap the notification scheme for the related projects while releasing.

References: https://confluence.atlassian.com/adminjiraserver071/creating-a-notification-scheme-802592629.html

Steps

  1. Go to Jira Settings -> Issues -> Notification Schemes (usually there’s only one by default).
  2. Create a blank scheme (as in the screenshot). This means that events (in my case, ‘Issue Updated’) will not send any notifications.
  3. Now go to Project Settings -> Notification Schemes and swap the project’s notification scheme for the blank one.
  4. Double-check the notification schemes in Jira Settings: your project should now be attached to the blank scheme.
  5. Finally, release the Done / Closed tickets on the Kanban board -> voila, no email notifications are sent at all.

You’ll still get the events themselves logged in Jira Settings -> System -> Audit log, which is neat, since anyone can refer back to them if needed! If you’d rather script the whole thing, a hedged REST sketch follows.
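The Jira Cloud edit-issue endpoint accepts a notifyUsers=false query parameter (honored for admins), so a fixVersion can be set without sending emails. A sketch, where the JQL, version name, and site URL are placeholders:

```python
# Sketch: bulk-set a fixVersion on old Done/Closed tickets with notifications
# suppressed (notifyUsers=false; honored for Jira admins). Site URL, credentials,
# JQL, and version name are placeholders.
import requests
from requests.auth import HTTPBasicAuth

SITE = "https://your-site.atlassian.net"
auth = HTTPBasicAuth("you@example.com", "API_TOKEN")

jql = "project = ABC AND status in (Done, Closed) AND fixVersion is EMPTY"
issues = requests.get(f"{SITE}/rest/api/3/search",
                      params={"jql": jql, "fields": "key", "maxResults": 100},
                      auth=auth).json()["issues"]

for issue in issues:
    requests.put(
        f"{SITE}/rest/api/3/issue/{issue['key']}",
        params={"notifyUsers": "false"},  # the point: no email per update
        json={"update": {"fixVersions": [{"add": {"name": "legacy-cleanup"}}]}},
        auth=auth,
    )
```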

Other solutions

These might include turning off outgoing email entirely, but while it’s off, the events build up a queue of notifications that missed their scheduled sendout, and they get delivered once you turn email back on 🙂 So don’t do this.

Anatomy of a Distributed Team: Workflows, Agility, Communication – my talk from Atlassian Summit 2018

It’s been an extremely fruitful Atlassian Summit 2018 in Barcelona! My main purpose was to speak about distributed teams; however, the atmosphere was so cheerful and friendly that it felt more like a fireside chat, even though my talk was the closing one.

Got an awesome track lead! Brought a giant bag of Atlassian merch for AUG events in Ufa, thanks Darlene! Finally met Ben Linders (check out his Agile Self-Assessment game!), to whom I’d given a Q&A (now available in Japanese and Chinese).

Talked with zillions of new people from the Atlassian Marketplace, Bitbucket, DevOps, Jira Cloud, Adaptavist, Code Barrel, and loads of other Atlassian-related folks (including long conversations with Mike Cannon-Brookes (WHOA!)) 🙂 Met AUG leaders from all over the world, and the whole atmosphere was downright cozy and welcoming (you guys were awesome)!

Sources for the presentations:

  • Summit_Distirbuted_Teams_v1.0
  • Agile Communication in Distributed Teams (with no overlapping hours)
  • Workflow for the Requirements in the Distributed Team

(Pictured: Atlassian AUG Leaders.)