# product-management
m
Hi All!! Need some direction. I am a PM on the internal developer platform team, which provides proprietary languages and tools to internal developers who build products for B2B end customers (SaaS model). There are 1000+ engineers who use these platform tools to build B2B products.
• My team is working on a problem: fetching data from a proprietary database in the platform using proprietary languages. (There is an existing solution, but we are trying to revamp it for various reasons.)
• The lead engineer and I are trying to figure out the design/solution, and we have 3 possible approaches. One critical factor in deciding which design approach to choose is how each solution integrates or fits in with the existing features in the overall developer ecosystem.
• In the past we have relied on feedback from 3 to 5 principal engineers/architects from the user base, assuming they represent the voice of the other 1000+ internal product developers. We are not even sure whether these architects use/develop products hands-on for most of their time or do only design work. So we would like to take a more data-informed approach going forward.
• Even though we cannot ask all 1000+ engineers to A/B test, the plan is to expand to, say, 10 to 15 engineers to A/B test the design/solution approaches. Basic mocks/prototypes for these 3 approaches will be ready soon. The platform does not have any specific A/B tools for us to use.
Any ideas on how I should approach finding which approach works well for developers rather than just relying on feedback? Thanks!!
j
Hello @Madhur Jain! From a user research perspective, there is a great book called Testing Business Ideas by David J. Bland. I would look there for possible desirability experiments to extend your interviews with more ways to get proper feedback. That could help with the design part. However, if it's mostly about making a technical decision on the platform architecture, then I'm not 100% sure it should be based purely on the users' voice.
c
I think the most important consideration here is whether or not you’re going to be breaking compatibility for your consumers. Are the changes mostly to UX, or do they represent a new paradigm which would require training and potentially create orphaned work if you retreat (i.e. A/B testing which doesn’t lead anywhere)? If it’s just UX changes, then I’d suggest looking into some 3rd-party feature flag/rollout tools which will allow you to do large-scale testing on random samples of users (a minimal sketch of the idea is below). Either way, you’re going to need to develop some metrics with which to measure success against your objectives. What is the goal in making these changes in the first place? Before you change anything for anyone, know what your goal is and make sure you know how to measure success.
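(A minimal sketch of what those feature flag/rollout tools do under the hood: deterministic bucketing, so each user always sees the same variant. All names and the 50/50 split here are illustrative assumptions, not any specific tool's API.)

```python
import hashlib

def variant_for(user_id: str, experiment: str, pct_a: int = 50) -> str:
    """Deterministically bucket a user into variant 'A' or 'B'.

    Hashing user_id + experiment name keeps the assignment stable across
    sessions without storing any state, while still randomizing across users.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # roughly uniform over 0..99
    return "A" if bucket < pct_a else "B"

# Example: route an engineer to the existing or the revamped data-fetch UI
if variant_for("engineer-042", "data-fetch-revamp") == "B":
    ...  # serve the new prototype and log usage metrics
else:
    ...  # serve the existing solution as the control
```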
m
To provide more context: we currently have a proprietary low-code user interface for writing code which fetches data from the proprietary DB, but there are concerns with the existing solution. The goal is to enable writing code much faster (specifically code which fetches data from the DB, i.e. improve productivity + satisfaction). The options are:
1. Enhance the existing user interface and make it simpler
2. Enable a text-based solution, e.g. writing SQL directly
3. A mix of the above two approaches
Because of complexity, each option could take 6-9 months to build and roll out. How does my team figure out which option will be the best going forward? Can this be A/B tested to get some data, or should I just give the prototypes to the users and get their qualitative feedback? (A sketch of how the prototype sessions could be instrumented is below.)
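(One way to turn "writing code much faster" into data during those prototype sessions, whether or not a formal A/B test happens: time the same task in each prototype. A hypothetical sketch; the option names, task names, and log format are all assumptions.)

```python
import json
import time
from contextlib import contextmanager

@contextmanager
def timed_task(user_id: str, option: str, task: str):
    """Log how long one engineer takes to finish one task in one prototype."""
    start = time.monotonic()
    try:
        yield
    finally:
        elapsed = time.monotonic() - start
        print(json.dumps({
            "user": user_id,
            "option": option,  # "enhanced-ui", "sql-text", or "hybrid"
            "task": task,
            "seconds": round(elapsed, 1),
        }))

# Usage: have each engineer attempt the same fetch task in each prototype
with timed_task("engineer-007", "sql-text", "join-two-tables"):
    ...  # engineer completes the task while the timer runs
```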
g
Solution validation techniques have clearly defined purposes, as well as conditions under which they can be applied. Make sure the conditions and purpose of your validation are clear before using a specific technique; otherwise your results might be extremely misleading. A/B testing is used to test one variable at a time and requires feedback from a large group. It is a statistical method, which means you need a statistically significant sample to get good results, and 15 engineers is not one (see the quick sample-size calculation below). Look here for more about this method: https://amplitude.com/blog/ab-testing
Based on your description, qualitative feedback fits the validation you need to run much better: there are many aspects of these solutions you need feedback on, and you need to understand exactly why a tester chooses one solution over another. Another aspect of running any validation is that you should try to define personas. When you build a solution for 1000+ engineers (with a wide spectrum of skill sets) but talk to 5 principals who don't code, that's a giant mismatch. Eventually you will build a solution, but it is going to work like a 5-inch 4K screen for a person with sight issues.
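(A back-of-envelope sample-size calculation showing why 15 engineers can't reach significance. The baseline and target rates are made-up assumptions; the formula is the standard two-proportion power calculation.)

```python
from scipy.stats import norm

alpha, power = 0.05, 0.80
p1, p2 = 0.50, 0.65            # assumed task success rate: current vs hoped-for
z_a = norm.ppf(1 - alpha / 2)  # ~1.96
z_b = norm.ppf(power)          # ~0.84

n_per_group = ((z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))) / (p1 - p2) ** 2
print(round(n_per_group))      # ~167 engineers PER VARIANT, far beyond 15 total
```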
c
You might already have all this, but the first step for me would be something other than A/B testing. It would be finding out which personas I have on my platform.
1. Do I have actual coders? They will not be happy with any low-code solution that doesn’t provide an API for them to integrate with their handcrafted applications. How many are there (% and actual numbers)? Do they actually use the data behind the low-code solution? How? How should they, in an optimal future? How would they benefit from your options, or are you lacking an option?
2. Do I have citizen developers? How technologically apt are they? How many are they (% and hard numbers)? How would they benefit from each of your options?
3. … you will most probably have more…
If you have your personas, it will be easy to select a handful of each persona for user interviews, something you can do without any implementation, which would otherwise delay and cost you every time you need to go this route for a new feature. Narrow down the options as much as possible; you might be able to single out something here already. THEN proceed to A/B testing with your CURRENT and your NEW solution and measure the actual improvement and outcomes you achieved, with the least number of prototypes possible (a sketch of that comparison is below).
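(Once you are at that CURRENT-vs-NEW stage, the measurement itself can stay simple. A sketch with made-up task-completion times; Welch's t-test is one reasonable choice for the comparison, not the only one.)

```python
from scipy import stats

# Hypothetical task-completion times in seconds for the same fetch task
current = [312, 287, 355, 298, 330, 341, 276, 309]
new     = [241, 255, 229, 270, 246, 238, 262, 251]

t, p = stats.ttest_ind(current, new, equal_var=False)  # Welch's t-test
print(f"mean current={sum(current)/len(current):.0f}s, "
      f"mean new={sum(new)/len(new):.0f}s, p={p:.4f}")
```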