Castro Podcast Player https://castro.fm/ Castro makes it easy to manage lots of podcasts and enjoy the best episodes of all your favorite podcasts. 2025-02-27T14:41:58-05:00 State of the App: Year 1 2025-02-15T00:00:00-05:00 2025-02-15T21:55:15-05:00 Dustin Bluck https://castro.fm/blog/state-of-the-app-year-1 <![CDATA[

Merch

As a celebration of one year with Castro, we are opening a merch store. We tried a few stores, and this one had the best combination of quality products and a good selection. As always, all proceeds directly fund development of the app. Do not feel pressured to buy anything; I’d much rather you buy an annual subscription to Castro for a friend than a T-shirt. But we do get occasional requests for merch or about other ways to support us, so it’s there if you want it. Email support if you have any questions or issues with the site.

Castro After 1 Year

Overall I’m happy with Castro’s improvements over the past year, but we need to get iOS product updates out the door faster. That’s my number one focus, and hopefully you will start to see those efforts bear fruit soon. Until then, I thought I’d share some of the work we’ve done that isn’t as easy for you to see but matters a great deal.

Some might have expected a reflection on the state of the podcast industry in 2025 or Castro’s place in the universe, but we have work to do. This is a post about results and what your subscription dollars are buying: faster feed updates and a more reliable, performant podcast app.

Crash Rate

Castro Crash Rate
Crashes / Sessions over time

A year ago, the app crashed in 1 out of every 58 sessions. Today it’s more like 1 out of every 750. We still want to improve this, but the number is quite reasonable, going from the worst-performing quartile according to iOS benchmarks to the second best. I obviously won’t be satisfied until we’re in the top bracket. If you expand the chart above, it’s really the best Castro’s crash rate has ever been, aside from a brief period in 2018.

If you’re wondering what we actually fixed here, there was no magic bullet.

  • We rewrote CarPlay, as the old version built on MPContentItem seemed to have more and more issues on newer devices
  • We rearchitected how Castro’s artwork caching works and removed some redundancies, allowing us to support episode artwork and lots of artwork in explore while improving performance and reliability
  • We are building out a new data layer based on structured concurrency, as well as a new networking layer, which bypasses various threading issues and data races you might have seen in the old code
  • We stayed on top of issues that crop up when we link to newer versions of iOS, so the number continues to go down instead of creeping back up

Also, I’m obligated to say, as someone with a background in mobile app reliability, that crashes per session is not really the best way to parse crash data, but it’s the number Apple gives us and the fairest comparison across apps.

Outages

It was a touchy subject in the past, but there have been no sustained service outages since we took ownership. There was really only one unplanned outage of any consequence, when our cloud provider spontaneously rebooted our database server and no team members were around. I believe it lasted an hour or so before everything came back. Many apologies for that.

Nobody running a web service can tell you it will never go down, but I feel good about our track record. We will continue to do our best to avoid issues and communicate proactively when they occur. I feel especially good about this in light of the number of changes we have made to the infrastructure, enumerated below. It’s an issue I have taken incredibly seriously and I really intend for Castro to be associated with excellent service and reliability, not outages.

Backend Updates

I've compiled a non-exhaustive list of features and upgrades added to Castro’s backend, as my focus really has been on creating a sustainable infrastructure that will be around for a long time and allow us to iterate quickly and safely.

One thing to call out, if you’re going to skip the list below: this weekend we once again dialed up our feed updates. If Castro’s feed updates aren’t already the fastest in the industry, they will be soon. Also, if you miss an episode on Castro today, it’s almost certain that the backend updated just fine but event processing failed on the client. This is an issue I have been very focused on and expect to eliminate entirely in the next iOS release.

Our backend game was, indeed, upped

The List:

  • Updated to Postgres 16 from 11 and moved to a more performant database server
    • Likely our most impactful fix of the year, leading to a dramatic improvement in pretty much everything
  • All backend servers redeployed and updated to new images with Debian 12. Some of these had not been touched in several years.
  • Updated to Elasticsearch 7 and built out a new search cluster; we are now indexing episodes and will soon be indexing transcripts
  • Larger, more performant Redis servers running an updated Redis version
  • Updated to Rails 7.2 from (5.2?)
    • We’re also using the latest version of Sidekiq Pro and other relevant dependencies
  • Ruby upgraded from 2.5 to 3.3, including YJIT support
    • Of particular note was MALLOC_ARENA_MAX: 2, which was an incredible improvement in worker efficiency
  • Rolled out self-serve ads
  • Faster feed updates
    • A big issue Castro had in the past was that unpopular* feeds could fall hours behind, because they’d be prioritized behind updates to popular feeds and effectively be starved out. Sometimes, every popular feed would update dozens of times before the unpopular ones.
    • This now works more like a FIFO queue and Castro’s feed update mechanisms are responsive to load, so when the server gets behind, we start updating every active* feed slightly less often but still in order.
    • Just this past weekend we again cut the time between feed updates. (All numbers in this post are from before that improvement, so things are actually better than written here)
    • It’s a little hard to quantify performance before acquisition, because performance was obviously degraded in January of 2024, but *best* case scenario for an unpopular feed was updating every hour when we took over.
    • Now, the *worst* case scenario is more like a half hour but 90%+ of the time every active feed on Castro is updated at least every 8-10 minutes, often faster, with popular feeds usually within 4-5 minutes.
    • From 43 days to 2 hours to 8 minutes
    • Private and popular feed updates have seen marked improvements, but unpopular feeds were the real issue before, and that issue is entirely gone.
    • *For purposes of terminology above, “active” means someone on Castro subscribes to the feed; “unpopular” feeds are active but have only a handful of listeners. The exact number has changed several times and will likely be tweaked more.
    • More info on one of many feed improvements
  • Brand new explore data and APIs, brand new charts system, category charts, episode charts, etc
    • Automated locale support in charts, good explore content in every country, added many locales, etc
  • podcast:transcript support, parser supporting all 4 major transcripts, as well as XML transcripts and more
  • Enriched podcast data
    • Updates to podcast model
      • language / locale support, canonical links, etc
    • Updates to episode model
      • Added episode artwork (showing in app)
      • Added episode and season data (showing in app)
      • Added individual episode author (not showing in app quite yet)
    • Normalizing iTunes categories across feeds
  • Better Canonicalization
    • Added support for canonical self links, itunes_new_feed_url, etc.
    • Rebuilt feed canonicalization from the ground up, incorporating redirects, iTunes data, self links, URI normalization, and more.
    • We’re seeing fewer dead search results, fewer duplicates in search, and by combining feeds we save on infrastructure costs
    • There is always more to do here but really happy with the strides we’ve made
  • Cutover to profiles
    • With the old account system, many things were ranked according to how many accounts had ever subscribed to them. This made the power laws of podcasts even worse than they actually are, skewing search results and charts. Since we’ve cut over to profiles, you’ll notice this effect is less pronounced and the search results and charts are more reasonable.
    • Profiles allow access to subscriptions across devices, something you will hear more about soon
    • I'm very proud we were able to do this with no disruption in service.
  • WebSub support
    • 17% of active feeds have updated at least once using WebSub, making your feed updates instant and saving extra work on our infrastructure
  • Improvements around identifying private feeds
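The FIFO, load-responsive scheduling described in the feed update items above can be sketched roughly like this (a simplified illustration with invented names, not Castro’s actual code):

```ruby
# Sketch of a FIFO feed-update queue that stretches its update
# interval under load instead of starving unpopular feeds.
# All class and method names here are invented for illustration.
class FeedUpdateQueue
  def initialize(base_interval:, max_interval:)
    @base_interval = base_interval # seconds between updates when healthy
    @max_interval  = max_interval  # ceiling when the server falls behind
    @queue = []                    # feeds due for update, oldest first
  end

  # Every active feed goes to the back of the line after each update,
  # so popular feeds can never starve out unpopular ones.
  def enqueue(feed)
    @queue << feed
  end

  # Strictly first in, first out.
  def next_feed
    @queue.shift
  end

  # Under load, stretch the interval for every feed equally instead of
  # skipping the unpopular ones.
  def current_interval(backlog_factor)
    [@base_interval * backlog_factor, @max_interval].min
  end
end
```

The key property is that backpressure slows everyone down uniformly rather than letting popular feeds update dozens of times while unpopular feeds wait.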

Getting better all the time

All this stuff adds up. Event queries, our most common endpoint by far, had a mean-90 of 40-60ms in the past; right now that number is under 15ms and getting better on a weekly basis. Many of our endpoints have seen even more substantial improvements.

Support

One more thing to call out: our email support is really good. An actual person with deep knowledge of the app reads every email, helping where we can and telling you when we cannot. We try to answer everything, but if we haven’t gotten back to you, ping us again in a few days and we’ll get there. We also do our best to follow up once we’ve fixed something, even if it’s months down the road.

Thanks

Thanks for reading and for using Castro. If you like what we’re doing, please buy a subscription and tell your friends or social media followers. The only way small developers who are doing things the right way can survive is if people like you continue to support them.

]]>
Transcript Tag Support 2025-02-07T10:00:00-05:00 2025-02-07T10:41:31-05:00 Dustin Bluck https://castro.fm/blog/transcript-tag-release <![CDATA[

Today we’re releasing our initial support for the podcast:transcript tag. We are not generating transcripts but rather displaying the ones that are already in the feed, consistent with our goal of delivering the best UX possible while staying true to the creator’s content.

What we’re shipping

We have full support for the four formats outlined in the podcast-namespace, so any standard JSON, SRT, or VTT file will "just work". See the example file here, corresponding to the screenshot above.

HTML support is a bit trickier but we've got it working well, even for podcasts with somewhat non-standard timestamps.

A podcast with 5 different transcript types!
A podcast with only HTML transcripts

In addition, we are supporting XML transcripts like this one, and we're even rendering various plain text formats including this and this.

These are all being parsed on Castro’s backend and normalized for the client, offering a consistent user experience regardless of transcript origin. If you’re providing transcripts in a feed, VTT or SRT is probably your best bet for the most consistent experience. If a podcast doesn’t seem to be rendering correctly, or you’re not seeing a transcript you expect, please email us and we’ll take a look.
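To give a concrete sense of what "normalized for the client" can look like, here is a minimal SRT-to-structured-data sketch (illustrative only, not Castro’s actual parser, which also handles JSON, VTT, HTML, XML, and many edge cases):

```ruby
# Minimal sketch: turn SRT cue blocks into an array of
# { start:, end:, text: } hashes with times in seconds.
SRT_TIME = /(\d{2}):(\d{2}):(\d{2})[,.](\d{3})/

def srt_seconds(parts)
  h, m, s, ms = parts.map(&:to_i)
  h * 3600 + m * 60 + s + ms / 1000.0
end

def parse_srt(srt)
  srt.split(/\r?\n\s*\r?\n/).filter_map do |block|
    lines = block.strip.lines.map(&:chomp)
    next if lines.size < 3
    # lines[0] is the cue index; lines[1] holds "start --> end"
    m = lines[1].match(/#{SRT_TIME}\s*-->\s*#{SRT_TIME}/)
    next unless m
    { start: srt_seconds(m.captures[0, 4]),
      end:   srt_seconds(m.captures[4, 4]),
      text:  lines[2..].join(" ") }
  end
end
```

Once every format is reduced to a shape like this, the client never needs to know which transcript type the feed shipped.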

Why aren’t we autogenerating?

I’m happy about what we’re releasing because it’s really important to me to show the podcast experience that the feed owner is trying to put out into the world. But there are drawbacks. For now, we can’t easily sync the words to the audio, since most transcripts do not include ads. Dynamic ad insertion in particular makes this difficult.

We could provide a more engaging UX here if we just generated our own transcripts. That’s the easiest way to solve the audio-word sync issue. Of course, there are cost concerns with that approach. In addition to cost, every layer of the stack adding their own transcripts doesn’t make a lot of sense to me.

  • Podcast creators use their own tools for recording, editing, etc. Riverside, Zencastr, and others offer transcriptions.
  • Podcast hosting companies have their own services and offerings. Again, transistor.fm and buzzsprout have transcription options.
  • Operating systems can and indeed already do offer transcription of audio (Live Captions is very good on both Android and iOS)
  • Do we also need the podcast app to generate another layer of transcripts on top of that?

Aside from this redundancy, transcripts generated on the client, even really, really good ones, cannot possibly know the creator’s intent. I see this issue repeatedly when I try offerings from other services. If a word is garbled or I can’t hear it correctly, that’s when I want to check the transcript. But in that situation, generated transcripts or live captions don’t have any better idea of what was said than I do and indeed they are often worse.

To me it’s far better to have the podcast provider tell you what they actually said. Of course, many podcasts are still going to use AI to generate their transcript, like Bloomberg and others are doing now, but at least they have the ability to correct mistakes on their end with this approach. This seems far superior to Castro compounding the problem by generating another layer of error-prone transcripts.

Future of the transcript tag

Adoption of this feature is not yet as widespread as I would like. Depending on how you slice the data, maybe 5-10% of new podcast episodes from popular feeds have transcripts. But that’s more than enough to be useful, and several large podcasts with extensive back catalogs have transcripts on every episode. However, there are two things that make me confident this number will grow and this is worth investing in:

  • There are numerous options for transcript generation at every level now. It’s very low effort to tack on basic transcript support in your podcast recording software or from your hosting company.
  • Industry adoption is widespread. Apple Podcasts shows the transcript tag when it is present. Pocket Casts, AntennaPod, and other apps also support this tag. Most major podcast hosting services support the tag.

Having access to the transcript is actually really handy. I was shocked how much I missed them once I spent a few days listening to shows with good transcripts. As more people experience this and more services make it easier to bolt transcripts on, I think we'll see transcript adoption continue to grow.

Making it better

This is only an initial release. We’re already working on improvements such as searching within the transcript, and we still want to pursue syncing audio to the text. We are exploring the best way to handle that, but I’m also open to ideas or discussion. If you are a podcast hosting company or creator who cares about transcripts and wants to help us provide a better experience to your listeners, please email [email protected] to discuss.

]]>
Holiday Update 2024-12-24T09:00:00-05:00 2024-12-24T08:59:02-05:00 Dustin Bluck https://castro.fm/blog/holiday-update <![CDATA[

Skip Outros / Holiday Icons

The big feature in this release is the ability to skip outros. This is probably our most requested feature so I’m thrilled we can finally deliver it. Merry Christmas!

Castro Red

In addition, for your holiday cheer, Castro has not one but two holiday icon choices for you. We’re also making a few other icons free for everyone, and we’ve revamped the splash screens to be a bit more modern and match the newer icon style.

Account Profiles

Another important inclusion in this release won’t be noticeable yet, but we’re rolling out account profiles under the hood. This feature, initially started by the former development team in 2021, enables sharing the same profile across multiple devices. It’s the largest change needed for device sync. It’s still locked to a single device while we iron out any bugs, but we’ll be building out syncing features soon. If you have any issues with 2024.12 related to your account or Castro Plus, this is the likely culprit. Please email support and we’ll help get it resolved.

Storage Page

We’ve gotten feedback that Castro can build up disk usage over time. To help track down bugs and better communicate what we’re doing with your disk space, we’ve released a page for visualizing all files stored within the app (accessed via Settings -> User Data -> Storage). We’ll continue to add more info here as needed. If you feel Castro is taking up too much space, take a screenshot and pass it along in a support email. We can help explain what’s going on, and it also gives us a good starting place for finding and fixing any related bugs.

iOS 15 Fixes

We haven’t done a great job of testing our latest features on small screens and older devices. This release includes a number of bug fixes that should improve the experience on the iPod Touch and similar devices. Now you can see the full episode information and won’t experience broken navigation bars, among other issues. We’re not abandoning iOS 15 any time soon.

Outage

Apologies for the brief outage December 17th. Our cloud provider shut down our database server for emergency maintenance while multiple team members were traveling. Typically we’d put the site in maintenance mode or swap in the standby for a longer outage, but by the time I got in front of a computer the server was already back up. Certain features of Castro were broken for about an hour. Sorry about that!

More to come soon. Happy holidays. Thanks for using Castro!

]]>
Making Castro’s Feeds Update Faster the Lazy Way 2024-10-20T00:00:00-04:00 2024-10-20T23:37:48-04:00 Dustin Bluck https://castro.fm/blog/making-castros-feeds-update-faster-the-lazy-way <![CDATA[

This is more technical than my previous posts. In terms of Castro itself, the app is doing great. Last week we again hit new user highs for 2024, presumably spurred by the pumpkin icon. The app is how users experience Castro, and it needs the most improvement, so that takes up 90% of our time, effort, and attention. We'll have more on the client soon, but that other 10% is also very important and is our subject here.

The Problem

You can broadly think of Castro’s backend in two parts: the endpoints Castro interacts with when you use the app and the workers we use to update your podcast feeds. The workers actually do many things, but ~99% of their clock cycles are spent checking podcast feeds and updating them. This is not directly user facing, so in theory it's not a prime candidate for optimization. Most update jobs don't change anything at all, and when a feed does change, a few hundred milliseconds to update our database is hardly noticeable to the average user.

Time before optimization

But in aggregate these jobs add up, and I’ve noticed that, as we’ve tacked various new checks and features onto the worker jobs, their execution time has crept upward from ~1 second to ~1.3-1.4 seconds. Given my knowledge of what’s actually happening, even 1 second seems a bit too long. I want to improve this number, but I don’t want to spend much time on it, because the marginal worker is pretty cheap and we have many other things to focus on.

So what are these jobs doing and how can we make it better?

The Setup

I inherited the system more than I designed it, but the Castro backend is fairly typical as far as I understand. We run a Ruby on Rails app with a Postgres database, and we use Sidekiq to update our podcast feeds. There are many types of workers, but a podcast update job does 3 things:

  1. Lookup some info about a podcast in the database
  2. Query the podcast URL
  3. Most of the time, that’s it; we’re done. Other times we have to record some data about the run, and less often we have to make some database writes for new episodes or updated podcast metadata.1
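The three steps above amount to a job shaped something like this (a simplified sketch with invented names; the real version is a Sidekiq worker with retries, timeouts, and metrics):

```ruby
# Simplified shape of one feed-update job. The db and http
# collaborators are stand-ins for ActiveRecord and Faraday.
class PodcastUpdateJob
  def initialize(db:, http:)
    @db   = db    # responds to #find and #save_episodes
    @http = http  # responds to #get(url) -> body string
  end

  def perform(podcast_id)
    podcast = @db.find(podcast_id)      # 1. look up podcast info
    body    = @http.get(podcast[:url])  # 2. query the podcast URL
    # 3. most of the time, nothing changed and we're done
    return :unchanged if body == podcast[:last_body]
    @db.save_episodes(podcast_id, body) # occasionally write new episodes/metadata
    :updated
  end
end
```

Keeping the job this thin is what makes the later profiling story tractable: almost everything interesting happens in step 2.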

So it’s a lazy afternoon and I decide I’m going to make this better. But I’m not going to refactor anything or make any large changes. Instead I’m going to give myself an hour or two, poke at things a little bit, and abandon my efforts if they prove fruitless.

I don’t know ruby2 or rails well. I’m much more comfortable with statically typed languages with meaningful function names, but I worked on performance optimizations for a living at one time so I have some relevant experience. The biggest thing I've learned working on performance is that 80% of the gain is going to come from <20% of the effort. For the vast majority of software systems, open source or otherwise, nobody is monitoring what every line of code in production is doing, so just looking carefully at things with fresh eyes will usually yield something that can be improved.

Debugging

The first thing I need to do is google how to profile ruby in production. Rbspy quickly comes up as the best tool for the job, and indeed it proves incredibly helpful. As I said, almost all the worker time is spent updating feeds, and we're doing things the lazy way. So I don't bother isolating a specific job or any other setup. I just run rbspy for a few minutes in production to see what those workers are actually doing:

Initial Flame

Roughly speaking we can separate the resulting graph into four distinct parts:

  • 18% sidekiq overhead / redis calls (far left)
  • 8% rails / active record overhead
  • 49% network request (large block in center)
  • 25% parsing feeds, database updates (bottom right)

On the surface I’m not sure this is bad. I didn’t have strong priors. Redis overhead seems high, so we'll check that in a moment. Most of our network requests are very quick, but it's also the thing we're doing the most and all requests are to random third parties. 49% could be reasonable, let's check what's actually happening in there.

Network Request
  • ~18% in the request block itself (well yeah that checks out, gotta start the request and wait)
  • ~10% here (shrug, idk, seems legit)
  • ~13% in a function called set_default_paths. Hmmm. What is that doing?

If the reader clicked that last link, they know just as much as I do, but from a cursory glance at the code it seems like it’s just setting up the trust store for the request. Mind you, this is 13% of all Castro’s worker time. I took several traces to ensure it was representative (we are trying to be lazy but not stupid; going down a rabbit hole based on an outlier would waste even more time).

Trust stores should generally be fine to reuse across requests, at least for our purposes. I guess what is happening is that every part of the network stack is being torn down and reconstructed every time. If you were just making an occasional network request, it might not matter very much, but since we’re doing this tens of millions of times per day, that setup adds up. From the perspective of a client engineer, this is an unexpected source of performance issues. When writing a client-side http library, reusing heavy files that don’t change would be an obvious thing to do. (e.g. Here is OkHttp setting it up once for the whole client.)

Improving the network request

We can do better.

I could set up a trust store in advance and reuse it. Net::HTTP looks like it’s checking for an existing file in that line of code, so there must be a config option. But do I really want to be creating trust stores? I do not want to start looking at OpenSSL API documentation. Think lazier.
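For what it’s worth, the hand-rolled reuse being dismissed here would look roughly like this in stdlib Ruby — build the store once at boot and hand it to each request, instead of rebuilding it per call (illustrative only, not what Castro shipped):

```ruby
require "openssl"

# Build the system trust store once instead of per request.
# set_default_paths is the same call that showed up in the flamegraph.
TRUST_STORE = OpenSSL::X509::Store.new.tap(&:set_default_paths)

# A later request could then reuse it with stdlib Net::HTTP, e.g.:
#   http = Net::HTTP.new(host, 443)
#   http.use_ssl = true
#   http.cert_store = TRUST_STORE
```

This is exactly the kind of per-client setup an HTTP library could do for you, which is the "think lazier" conclusion the post reaches next.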

Wait. Shouldn’t the networking stack just handle this? Maybe there's a simple way to turn on resource pooling. Luckily others have done actual work on this topic so we can just breeze through some blog posts. We stand on the shoulders of giants and late 2017 WeWork was extremely helpful so I hope everything went well for them the next couple years.

Anyway the upshot after thinking about this:

  • This is reasonably well known and documented behavior, it's just really not what we want in this case
  • The persistent gem would probably solve our problem. Even though “persistent” refers to reusing connections to the same server, which is very much not what we want, presumably it’s also pooling resources better.
  • Swapping out the underlying http client sounds like a scary change, but we are lucky that all the worker code is written against Faraday and it actually is a fairly small implementation detail
  • If we have to swap out the http client anyway... the more I read about this http stack, the less I like it.
  • Maybe we just take WeWork's advice above and use Typhoeus.

I add Typhoeus to the Gemfile, and it’s just one line of code to swap out the adapter.

Faraday.new do |builder|
 ...
 builder.adapter Faraday.default_adapter
end

Becomes

Faraday.new do |builder|
 ...
 builder.adapter :typhoeus
end

I'm not kidding, that was the whole change. The tests pass. After deploying to a test worker and making sure everything works, I give it a production workload for an hour. While it’s running, I look into a few small issues in the flamegraph and add better Redis connection pooling to cut down on some of that initial 18% above. I also disable the Typhoeus http cache (add cache: false to the above code snippet), as I notice cache setup showing up on new traces, and we have custom cache handling outside the http layer anyway. Test everything more, deploy all this to production, and let it sit overnight.

That’s maybe 2 hours of work. Large improvement, we’re back to ~1 second, which was my whole goal anyway. I can go back to working on Castro’s client. Mission Accomplished.

Or is it?

The next day I just can’t resist taking a few more production traces.

Second Flame Graph


The network request has gone from 49% of our time to 19%. But I’m pretty shocked to find out that with the improved networking speed, now 20% of worker time is spent in the active record connection pool. That can’t be right, what is happening? Taking more traces reveals this was actually an outlier on the lower side, most traces are spending 25-30% of all time waiting on an active record connection.

I verify we're following the advice here. We have a very standard Sidekiq setup with 8 threads per worker, and each of those threads should have a dedicated connection. What are these threads all waiting on? I could add more connections to the pool, but why would I need to? I feel like I'm missing something more fundamental.
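For reference, the standard way to guarantee every Sidekiq thread a dedicated connection is to size the ActiveRecord pool to at least Sidekiq's concurrency, e.g. in database.yml (a generic sketch, not Castro's actual config):

```yaml
# config/database.yml (generic sketch)
# With 8 Sidekiq threads per process, the pool must offer at least
# 8 connections, or threads will queue waiting on a checkout.
production:
  adapter: postgresql
  pool: <%= ENV.fetch("RAILS_MAX_THREADS", 8) %>
```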

I'm basically treating ActiveRecord as a black box, but of course it isn't. The right thing to do might be to read more blog posts, crawl through github issues, read the active record source code, and figure out why a connection wouldn't be freed. (Perhaps if you've worked with AR, you've already guessed the solution.) But let's just try a couple things first.

Maybe ActiveRecord is not very good at closing the connection after a job runs? What if we try to clear them proactively? Google for the right API (clear_all_active_connections!), make sure it only affects the current thread, and add it after each run.

code block 1

Run that on some representative data on a test worker.

After closing connections

Nope. If anything this is worse. Let's take a look at the job code above. Where would ActiveRecord be holding an unused connection? ... Ah: when we're querying the 3rd party podcast server, we don't need to do anything with our database, and we don't know how long that will take. Seems like we might want to release the connection before querying. Acquiring a new connection afterward will have overhead, but it's not going to outweigh 20-30% of all server time. (I asked ChatGPT if that's a good idea and it said no, but this still seems like a good idea to me.)

Idea #2

We try this:

code block 2

Run some tests locally and deploy to a worker for testing. I don’t even need a flamegraph to know this was a good change, as I notice the number of jobs we’re completing is significantly higher.

Final Trace
Final Time


Jackpot. Acquiring a connection effectively disappears from the trace. (The impact was much larger than I’d anticipated; I should've tried optimizing this sooner.) It seems the issue is that ActiveRecord is not always reusing the same connection within a thread and doesn’t free connections very quickly, so releasing proactively makes a huge difference. Of course, I am not the only person to notice this.
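The effect is easy to reproduce in a toy version. Here the "pool" is just a SizedQueue of tokens (not ActiveRecord), and jobs either hold a token across a slow simulated network call or release it first, the way the fix does:

```ruby
# Toy reproduction of the pool contention. Pool sizes and sleep
# durations are invented; the point is the shape of the fix.
def run_jobs(pool_size:, jobs:, release_during_network:)
  pool = SizedQueue.new(pool_size)
  pool_size.times { |i| pool << i }   # seed connection tokens

  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  threads = Array.new(jobs) do
    Thread.new do
      conn = pool.pop                 # check out a connection (fast db lookup)
      if release_during_network
        pool << conn                  # release before the slow feed fetch
        sleep 0.05                    # "network request"
        conn = pool.pop               # re-acquire for the occasional write
      else
        sleep 0.05                    # hold the connection the whole time
      end
      pool << conn                    # check the connection back in
    end
  end
  threads.each(&:join)
  Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
end

holding   = run_jobs(pool_size: 2, jobs: 6, release_during_network: false)
releasing = run_jobs(pool_size: 2, jobs: 6, release_during_network: true)
# Holding serializes 6 jobs onto 2 connections (~3 waves of sleeps);
# releasing first lets all 6 network calls overlap.
```

With only 2 tokens and 6 jobs, holding through the network call forces the jobs into sequential waves, while releasing first lets every slow call run concurrently; that's the same dynamic behind the trace improvement above.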

We’ve just cut Castro’s entire backend workload in half with two sessions of debugging and what amounts to two lines of code. Further tweaking got the average down to ~0.50 seconds. In theory we can update every feed much more frequently, and indeed we're already doing that; you might have noticed over the past week or two. We freed up so much worker capacity that we can't use it all yet, as I’m not sure the rest of the system could handle the load, meaning we'll run fewer workers going forward and save server costs as well.

Can we do better?

  • Almost certainly. I never even looked at the feed parsing or database write portions of the graph. Naively, those should take the most time, and I’m fairly sure there would be some low-hanging fruit if we went looking for it.
  • Not all feeds or feed jobs are the same. If you dig into this data a little bit in a non-lazy way, we have only improved the easy case. But most jobs are quick, and freeing up the fast case just overwhelms everything else.
  • Truly optimizing this would require isolating certain types of jobs and really digging into what individual runs are doing. We have enough throughput now that the juice isn’t worth the squeeze compared to everything else we have to do, but rest assured we'll continue improving this as needed.

Focus on Impact

Feed Cycle Times

The graphic shows less popular feeds that have only a few subscribers, so it's the worst-case scenario. (The bump was us slowing things down to ensure nothing broke before ramping things up in production.)

Historically, there have been complaints about Castro feeds sometimes falling days or hours behind. This does not happen anymore3 and hasn't for a long time. Today, every active4 feed is updated on our server every 10-12 minutes, an improvement from ~20 minutes before last week. At peak times this number may slip a bit, but honestly not much, and we're getting better there as well.

Hope you enjoy the faster feed updates!

Notes
1. In the case that we did find a new episode, a different job sends notifications to the user, so we don't have to worry about that here.
2. Ruby is an abomination of a programming language, but Rails makes it very easy to be productive.
3. Certainly some users might still experience issues. I'm not saying there aren't bugs. But those issues would generally be caused by app bugs or feed migrations, both of which we're improving (there may be a future post on feed migrations which is a problem we've mostly resolved).
4. Active here means a feed is alive and a user on Castro is subscribed to it.
]]>
Castro Fall Update 2024-09-25T12:22:00-04:00 2024-09-25T14:48:30-04:00 Dustin Bluck https://castro.fm/blog/castro-explore-ios-18 <![CDATA[

2024.9.1

This week's release (2024.9.1) fixed some small UI bugs and crash issues from the previous build. I promised more alternate icon choices on social media but those were pushed to the next build as we really need to get the crash fixes out. Don’t worry, the icons are still coming in the next couple of weeks.

Explore Release / iOS 18

I’m very excited about the new Explore tab in Castro. While it may not have felt like the most pressing need for daily users of Castro, it makes the app feel much more alive and dynamic. It’s important for widening our reach and for people coming to the app for the first time. It's also a marked improvement for non-US users as the old homepage was rarely updated in other locales. Surfacing not just podcasts but relevant episodes was really important to me, and I find the feature genuinely useful. It's night and day compared to the old discovery tab.

I’m also really happy we were able to support some of the new iOS 18 features the week they came out. This should be table stakes for an app like Castro. Users have noticed, and these changes are resonating:


With the recent releases, Castro is showing its best user numbers since our acquisition. The last few months have been Castro's first period of real sustained user growth since 2020, and that's without any marketing.


Castro
(A sleek, modern app icon that calls back to Castro's heritage while embodying new life and vibrancy)

On Shipping

Of course, change can be hard. Some people didn't love the new icon even though it is objectively glorious. There are also some discrepancies between the older and newer parts of the app as we modernize it. This is not ideal, and I've received a few complaints, so I want to say as firmly as I possibly can that I believe we're doing the right thing here.

Castro isn't a museum. We have to ship new features. The product has been stagnant for too long, and we can hardly say it's the best app for listening to podcasts on iOS when it still lacks basic features like device sync. We also can't sit in a room and rewrite the app from scratch for 6-12 months while the product remains stagnant. Even if we could, that's not a good way to ship software. Full rewrites rarely turn out to be a good idea.

So we're going to keep shipping, improving the UX and codebase as we go. The app will feel more cohesive as we progress, and in the meantime you'll get new features like episode artwork and explore. You'll also get the opportunity to give feedback on every incremental change, rather than everything new being dumped on you at once.

Future Plans

As I've shared in previous posts, here is a loose roadmap, in no particular order and with no timelines:

  • More alternate icons
  • Skip outros
  • Podcast summary rewrite to align with episode details
  • Performance improvements
  • Device sync for your subscribed feeds, then full queue sync / playback position, then iPad app
  • Episode search
  • New watch app
  • Support for podcast:transcript in RSS feeds

The last month or so has brought a lot of new features, and these are easily the most substantial changes since the acquisition. But we’re just getting started and have a ton more to ship. I believe the best is yet to come for Castro. Hope you'll stick around for the ride.

]]>
Summer Update 2024 2024-08-06T07:30:00-04:00 2024-09-25T12:32:00-04:00 Dustin Bluck https://castro.fm/blog/summer-update-2024 <![CDATA[

There is a new iOS app update available in the App Store right now. This update deserves some context, so I want to share what's going on with Castro and what's coming down the pike in the near future.

Castro Ads

Ads Portal

We've been testing a new self-serve ad portal on Castro for a month or so. It's been well-received, and there's a lot of appetite for advertising podcasts on Castro. The offering is basic for now, but we'll have more updates on this soon. If you have a podcast to promote, please give it a try.

New launch screen
New launch screen using modern iOS APIs

iOS 2024.7

Today's update is the last of what I would call the "foundational" releases we've been doing. The next few updates should be more product-focused and will start to include real changes to the UX. But for today we still have to eat our vegetables.

  • This release overhauls some core parts of the app. We're moving off of older, long-deprecated iOS APIs; modernizing will allow us to iterate much faster. We really couldn't justify building more features on top of things like PlayableContent, deprecated since iOS 14. You will notice a new launch screen as a result of this change, but the more significant near-term impact will be on CarPlay. Expect to see a fully rewritten CarPlay app with chapter support in time for iOS 18.
  • 2024.7 also includes a complete rewrite of the way the app deals with podcast artwork. The new image infrastructure will improve overall app performance and use disk space much more efficiently. We get recurring questions about disk space, and while podcast artwork isn't the only culprit, it is a big contributor. In terms of this update, you will notice a few improvements such as embedded artwork showing up in the mini player, widget images loading more consistently, higher-res artwork on the playback screen, and more.
  • Under the hood, this release makes the app aware of many core podcast feed elements that Castro is currently lacking or hiding. I may write a separate post on some of the backend changes, but 2024.7 includes support for episode artwork, season/episode numbers, and various other podcast features that we've had on the backend for a while but that the client doesn't show yet. They just need a bit of time to bake in the app before we surface them in the product. I've attached an image below showing the missing details. The design is not necessarily what we will bring to Castro, but it's important that we add all this information to the UI.
Castro Today
Castro today
New launch screen
Android screenshot showing episode artwork, author, media size, and episode number, which we will be supporting soon.
  • In addition to the above, we've spent weeks debugging and testing playback to address a number of complaints we've gotten including failures to play certain episodes, CarPlay crashes, lost place, lost playback speed, etc. Unfortunately, there's not a single cause and no silver bullet, but we're currently testing several fixes, either with a feature flag in this current release or held back internally. We'll keep iterating on this but we're definitely moving toward a better place. Nothing in the app is more important than actually playing your podcasts so we really want that to be perfect.

Additional Notes

I want to note that the number of changes we're making in this release and the next one is likely to result in some regressions. The first commit in Castro's repo is from November 17, 2014, so there are some dusty corners and we're going to have to break some eggs. Bear with us, and hopefully the brave TestFlight users will continue to protect most of you. They've done a masterful job over the past two weeks.

Finally, a few people have reached out asking how they can help support Castro. We have a select group of loyal fans, and their retention numbers are very good, but we don't have a massive userbase outside of our premium users. If you want to help, the main thing we need is users and mindshare. Tell your friends to try Castro again. Write your favorite podcasts/tech bloggers and ask them to link to us on their websites and talk about us on their shows. Every marginal user makes the platform more viable and justifies putting more investment and resources into making the app better.

Thanks for reading and look out for more updates in the near future.

]]>
Announcing WebSub Support 2024-06-20T10:52:00-04:00 2024-06-20T10:53:11-04:00 Castro Team https://castro.fm/blog/announcing-websub-support <![CDATA[

Yesterday we began rolling out support for WebSub, which allows more precise timing for your podcast updates. ~25% of the popular podcast feeds on our service already have WebSub support, and though the protocol has been around a long time, it's receiving increasing support in the podcast ecosystem, so we'll hopefully continue to see that number go up. Read more about WebSub here and here.
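For the curious, the subscriber side of WebSub is pleasantly simple: per the W3C spec, a subscriber sends a form-encoded POST to the feed's advertised hub, and the hub pushes new content to a callback URL whenever the feed changes. Below is a minimal Ruby sketch of building that subscription request — the URLs are placeholders, and the helper name is hypothetical, not Castro's actual code:

```ruby
# Minimal sketch of a WebSub subscription request per the W3C WebSub spec.
# The topic/callback URLs are placeholders, not Castro's real endpoints.
def websub_subscribe_params(topic:, callback:, lease_seconds: 86_400)
  {
    "hub.mode"          => "subscribe",
    "hub.topic"         => topic,          # the feed URL being subscribed to
    "hub.callback"      => callback,       # where the hub will POST new content
    "hub.lease_seconds" => lease_seconds.to_s,
  }
end

params = websub_subscribe_params(
  topic:    "https://example.com/podcast.xml",
  callback: "https://subscriber.example/websub/callback"
)

# The actual request is a form-encoded POST to the hub the feed advertises
# (via a Link rel="hub" header or element), e.g.:
#   Net::HTTP.post_form(URI("https://hub.example.com/"), params)
# The hub then verifies intent with a GET to the callback (hub.challenge),
# and afterwards POSTs the updated feed body whenever the topic changes.
```

This push model is why WebSub beats polling on latency: instead of waiting for the next scheduled crawl, the server learns about a new episode the moment the publisher pings its hub.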


Castro users are already seeing the benefits of more timely notifications. More improvements to come soon!

]]>
State of the App -- May 2024 2024-05-13T15:13:00-04:00 2024-05-13T15:14:46-04:00 Dustin Bluck https://castro.fm/blog/state-of-the-app-may-2024 <![CDATA[

Just wanted to give a quick recap of what we've been up to with Castro since acquiring the company.

  • I was on an episode of The Changelog. If you have any questions about the acquisition, this will probably answer them.
  • I did a Reddit AMA for /r/podcasts.

We released our first few client updates, after Apple finally let us transfer the app to a new account. Primarily we have been focused on the major pain points that people were emailing us about:

  • Sideloading
  • Theming / Dark mode inconsistencies
  • Duplicate feeds, "not authorized" error messages, feeds not updating, etc.

We've also updated every major piece of the backend. New servers, newer versions of elasticsearch, redis, sidekiq, nginx, rails, etc. This stuff hadn't been touched in several years, and bringing it up to date will allow us to iterate much more quickly.

Speaking of the backend, popular podcasts on Castro always update within about 10 minutes right now, and all podcasts should update every 40 minutes at the most, with occasional dips during peak hours. We're bringing these numbers down over time and hope everything will eventually be within 15 minutes, with popular feeds being more like 2-5 minutes. We still have work to do, but in general the backend is way more reliable than it was a few months ago, and the issues people are seeing now are mostly client driven.

One big thing we haven't really addressed is playback issues (background crashes, place being lost, etc.). This is our current major focus. I was hoping we'd have an update out this week, but it's not trivial to untangle everything, so we're going to take some extra time to get it right. We've even added a person to the team just to focus on this.

As I've communicated elsewhere, here is a loose roadmap of where we're going over the next few months.

  • CarPlay fixes / features
  • Accessibility fixes
  • Better recommendations / explore enhancements
  • Full watch app rewrite
  • Some UI / design tweaks to show more content and reduce wasted space
  • Device sync for your subscribed feeds
  • Episode search
  • Support for podcast:transcript and podcast:person tags in RSS feeds

Some smaller features people have been asking about that I can promise we are working on and will come out in the next couple of updates:

  • Chapters in CarPlay
  • Large Interactive / Queue Widget
  • Skip outro options
  • Clickable timestamps in show notes
  • Better episode artwork support

Also, I can share that our Android app incorporated Castro search and explore functionality in an update released this morning. It's a small but important step toward Castro for Android.

Finally, if you are a student or junior developer who lives in New York and is interested in podcast apps, I'd really like to add one more person to the team on a part-time basis. Please reach out.

Please email support with any questions.

Thanks for using Castro!

- Dustin

]]>
Pricing Updates 2024-04-11T07:02:00-04:00 2024-04-11T07:46:08-04:00 Castro Team https://castro.fm/blog/pricing-updates <![CDATA[

We wanted to give everyone a heads-up that we are making some updates to our subscription tiers and pricing.

TL;DR

  • We are lowering our standard prices.
  • We are discontinuing all discounts and C2 early supporter specials.
  • We have extended all current subscriptions (prior to this post going up).

New Pricing

  • Annual: $24.99
  • Monthly: $3.99
  • Quarterly: $9.99
  • Family (Annual): $39.99

This is a price drop for the vast majority of subscribers. We think these numbers and changes are fair, and they set us up well to offer Castro at a consistent price for the foreseeable future. This pricing is effective today. If you like the app and want to see it get better, now is a great time to subscribe.

Update: This will be €24.99 as well; some international prices will go into effect Friday 4/12 due to an oversight.

Early Supporter Discount

The two-tiered pricing has caused a great deal of confusion. Castro "3" has been out since 2018, and it's no longer sensible for certain users to be paying a lower price.

If you are on an early supporter plan right now, it will of course be valid until it expires. Then it will renew at the new pricing. The actual subscription will remain in the App Store. It will just be equivalent in every way to the normal subscription.

Subscription Extensions

As a thank you for supporting us through this transition and to compensate for the changes above, all existing subscribers have received additional time on their subscription (monthly was extended 30 extra days, quarterly 45, annual 90).

Thanks for using Castro and email [email protected] with any questions or concerns.

]]>
A Fresh Start for Castro 2024-01-31T05:03:00-05:00 2024-01-31T05:03:33-05:00 Castro Team https://castro.fm/blog/castro-is-back <![CDATA[

We are excited to announce Castro has been purchased by Bluck Apps. Castro is a great app with a long history on iOS and many passionate fans, and it will continue to operate in its current form. This is a return to its independent roots. We won’t be making any drastic changes, like overhauling the UI to look more like TikTok. We’re not adding an AI chatbot. We’ll just keep running the podcast service you already love, with a few tweaks to modernize and keep things running smoothly.

We hear you

We know that over the past few months Castro has not communicated well. The new team's #1 priority will be keeping our users informed. Starting today, all support emails will be answered in a timely manner. Major changes will be broadcast widely, and we'll let you know if something is going on with the app. We’re also working our way through the thousands of messages we’ve received over the last few months, so if you’ve already emailed us, don’t worry, we’ll get to you. Feel free to ping the thread if we haven’t responded.

We can’t guarantee every single issue will be fixed ASAP, but we’ll at least let you know we’re working on it.

Who we are

Bluck Apps is an independently run app studio and consulting agency. We already have a podcast app on Android. Though the UX is somewhat different, both Aurelian and Castro are designed to give a delightful experience to people who *really* love podcasts and listen to many of them. This is a niche, and we intend to serve that niche. If you have over 100 podcast subscriptions and listen to them all semi-regularly, you are probably one of our people.

We are very committed to the open podcasting ecosystem and taking over such a well-designed independent app is very cool for us.

What happens now

In the short term, nothing changes. If you’re a paying subscriber or free user, just keep using the app. If you’re experiencing issues or have feedback, email us at [email protected].

In the coming weeks and months, we’ll be making some changes under the hood to make the backend more stable and make sure new episodes sync more quickly. Once things are stabilized and the transition is complete, we’ll be turning our attention toward new features, such as syncing across devices. Feel free to email us if you have suggestions on what we should be working on first.

Aurelian Audio

We’ll be moving Aurelian Audio under the Castro umbrella, letting it benefit from Castro’s superior search and backend capabilities. If you’re a current user of the Aurelian app, it’s only going to get better. You may want to sign up for premium now because it won’t be $1 / year forever.

I really hate subscription apps. Will you stop charging a subscription?

Sorry, there’s no other path forward for Castro, which has ongoing maintenance and server costs. Think of it like Patreon or Substack. Your monthly fee is supporting the creation of the product. However, we’re not buying Castro to milk it for revenue. The price of Castro is not going up at this time.

Keep in touch

We are committed to keeping our users informed going forward, and we’re making an effort to reach people wherever they are.

  • Sign up for our newsletter at the bottom of this blog post or anywhere on our website. We’ll start sending out updates over the next few weeks.
  • This blog is our most direct way to communicate with you, and there is an RSS feed here.
  • X / Twitter
  • Mastodon
  • Instagram

New country

Castro is now owned and operated in Brooklyn, New York, rather than Canada. We’ll be updating some parts of the website and relevant legal agreements to reflect this, but it will have no impact on the way the app functions.

Anything else

We’d like to thank the founders of Castro for building a great app and the seller for being extremely professional and courteous. We’re going to take good care of the app they made. If you have any other questions, please get in touch.

]]>