I did something like this last year.
I started by conducting developer interviews, and splitting up their flow into:
• Local development (IDEs, developer environments such as VMs, etc.)
• Review process (commit reviews, back-and-forths, merging)
• Build process (build tooling)
• Validation (QA, unit tests, CI)
• Deployment (not developer-facing, but useful to have)
Then I grouped developers into personas (for instance: desktop-product1, web-product1, web-product2, platform) and filled in a table based on the journey split above.
The next bit was instrumenting the flow to get metrics such as time spent in each phase. That work is still in progress, but we do have some baselines.
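To give a sense of what that instrumentation can look like: if your tooling emits a timestamped event at each phase boundary, time-in-phase is just the gap between consecutive events. A minimal sketch (the event names and timestamps below are illustrative assumptions, not from any real system):

```python
from datetime import datetime

# Hypothetical timestamped events for one change, emitted at each
# phase boundary. Names are made up for illustration.
events = [
    ("commit_pushed",   datetime(2024, 5, 1, 9, 0)),
    ("review_started",  datetime(2024, 5, 1, 9, 30)),
    ("review_approved", datetime(2024, 5, 1, 14, 0)),
    ("build_finished",  datetime(2024, 5, 1, 14, 20)),
    ("ci_passed",       datetime(2024, 5, 1, 15, 5)),
]

def phase_durations(events):
    """Minutes spent between consecutive phase boundaries."""
    durations = {}
    for (name_a, t_a), (name_b, t_b) in zip(events, events[1:]):
        durations[f"{name_a} -> {name_b}"] = (t_b - t_a).total_seconds() / 60
    return durations

print(phase_durations(events))
```

Aggregating these per persona is what lets you compare baselines across the journey stages.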
As for qualitative feedback, we have a few feedback loops: surveys, developer champions, and user interviews. We use some of these to calculate CSAT.
Hope this helps!