I don’t know if this is universal in the corporate world, but in my call center role we’re often pressured to engage with new software tools even when they aren’t useful by design or have defects that render them unusable.
For my role, these aren’t tools with much of a learning curve, so I don’t think an unwillingness to learn is muddying the waters. In my opinion, the organic usage statistics say more about the application than about the employees who were meant to use it. It seems like the usage data could be used to figure out where improvements need to be made on the software side.
Instead, when a new application shows a low adoption rate, we’re pressured to increase adoption. Usage is tracked down to the individual employee level, and it becomes our job to use the tool anyway or to provide endless examples proving that there are bugs or other issues driving the low usage.
Can you guys help me understand the other side? How have low adoption rates been viewed for applications you’ve worked on?
submitted by /u/Internet_is_my_bff