As a celebration of one year with Castro, we are opening a merch store. We tried a few stores, and this one offered the best combination of product quality and selection. As always, all proceeds directly fund development of the app. Don't feel pressured to buy anything; I'd much rather you buy an annual subscription to Castro for a friend than a T-shirt. But we do get occasional requests for merch and about other ways to support us, so it's there if you want it. Email support if you have any questions or issues with the site.
Overall I'm happy with Castro's improvements over the past year, but we need to get iOS product work out the door faster. That's my number one focus, and hopefully you will start to see those efforts bear fruit soon. Until then, I thought I'd share some of the work we've done that isn't as easy for you to see but does matter a great deal.
Some might have expected a reflection on the state of the podcast industry in 2025 or Castro's place in the universe, but we have work to do. This is a post about results and what your subscription dollars are buying: faster feed updates and a more reliable, more performant podcast app.
A year ago, the app crashed 1 out of every 58 sessions. Today it's more like 1 out of every 750. We still want to improve this, but the number is quite reasonable, taking us from the worst-performing quartile according to iOS benchmarks to the second best. I obviously won't be satisfied until we're in the top bracket. If you expand out the chart above, this is really the best Castro's crash rate has ever been aside from a brief period in 2018.
If you’re wondering what we actually fixed here, there was no magic bullet.
Also, I'm obligated to say, as someone with a background in mobile app reliability, that sessions per crash is not really the best way to parse crash data, but it's the number Apple gives us and the fairest comparison across apps.
It was a touchy subject in the past, but there have been no sustained service outages since we took ownership. There was really only one unplanned outage of any consequence, when our cloud provider spontaneously rebooted our database server and no team members were around. I believe it lasted an hour or so before everything came back. Many apologies for that.
Nobody running a web service can tell you it will never go down, but I feel good about our track record. We will continue to do our best to avoid issues and communicate proactively when they occur. I feel especially good about this in light of the number of changes we have made to the infrastructure, enumerated below. It’s an issue I have taken incredibly seriously and I really intend for Castro to be associated with excellent service and reliability, not outages.
I've compiled a non-exhaustive list of features and upgrades added to Castro’s backend, as my focus really has been on creating a sustainable infrastructure that will be around for a long time and allow us to iterate quickly and safely.
One thing to call out, if you're going to skip the list below, is that this weekend we once again dialed up our feed updates. If Castro's feed updates aren't already the fastest in the industry, they will be soon. Also, if you miss an episode on Castro today, it's almost certain that the backend updated just fine but the event processing failed on the client. This is an issue I have been very focused on and expect to eliminate entirely in the next iOS release.
All this stuff adds up. Event queries, our most common endpoint by far, used to have a mean-90 of 40-60ms; right now that number is under 15ms and getting better on a weekly basis. Many of our endpoints have seen even more substantial improvements.
One more thing to call out is that our email support is really good. An actual person with deep knowledge of the app reads every email, helping where we can and telling you when we cannot. We try to answer everything, but if we haven't gotten back to you, ping us again in a few days and we'll get there. We also do our best to follow up again once we've fixed something, even if it's months down the road.
Thanks for reading and for using Castro. If you like what we’re doing, please buy a subscription and tell your friends or social media followers. The only way small developers who are doing things the right way can survive is if people like you continue to support them.
Today we're releasing our initial support for the podcast:transcript tag. This is not about generating transcripts but rather displaying the ones that are already in the feed, consistent with our goal of delivering the best UX possible while staying true to the creator's content.
[Screenshots: transcripts displayed in Castro]
We have full support for the four formats outlined in the podcast-namespace, so any standard JSON, SRT, or VTT file will "just work". See the example file here, corresponding to the screenshot above.
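For reference, a feed advertises a transcript with one podcast:transcript tag per available format, per the namespace spec (URL hypothetical):

```xml
<podcast:transcript
  url="https://example.com/episode-1/transcript.vtt"
  type="text/vtt" />
```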
HTML support is a bit trickier but we've got it working well, even for podcasts with somewhat non-standard timestamps.
[Screenshots: HTML transcripts rendered in Castro]
In addition, we are supporting XML transcripts like this one, and we're even rendering various plain text formats including this and this.
These are all parsed on Castro's backend and normalized for the client, offering a consistent user experience regardless of transcript origin. If you're providing transcripts in a feed, VTT or SRT is probably your best bet for the most consistent experience. And if a podcast's transcript doesn't seem to render correctly, or you're not seeing a transcript you expect, please email us and we'll take a look.
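To give a flavor of the normalization involved, here's a minimal sketch (not Castro's actual parser) that converts SRT cues into uniform segments with millisecond timestamps:

```ruby
Segment = Struct.new(:start_ms, :end_ms, :text)

def srt_time_to_ms(h, m, s, ms)
  ((h.to_i * 3600 + m.to_i * 60 + s.to_i) * 1000) + ms.to_i
end

def parse_srt(srt)
  # SRT cues are blank-line-separated blocks: index, timing line, text lines
  srt.split(/\r?\n\r?\n+/).filter_map do |block|
    lines = block.strip.lines.map(&:chomp)
    cue = lines.find { |l| l.include?("-->") }
    next unless cue&.match(/(\d+):(\d+):(\d+)[.,](\d+)\s*-->\s*(\d+):(\d+):(\d+)[.,](\d+)/)
    text = lines[(lines.index(cue) + 1)..].join(" ")
    Segment.new(srt_time_to_ms($1, $2, $3, $4), srt_time_to_ms($5, $6, $7, $8), text)
  end
end
```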
I'm happy about what we're releasing because it's really important to me to show the podcast experience that the feed owner is trying to put out into the world. But there are drawbacks. For now, we can't easily sync the words to the audio, since most transcripts do not include ads. Dynamic ad insertion in particular makes this difficult.
We could provide a more engaging UX here if we just generated our own transcripts. That's the easiest way to solve the audio-word sync issue. Of course, there are cost concerns with that approach. And beyond cost, every layer of the stack adding its own transcripts doesn't make a lot of sense to me.
Aside from this redundancy, transcripts generated on the client, even really, really good ones, cannot possibly know the creator’s intent. I see this issue repeatedly when I try offerings from other services. If a word is garbled or I can’t hear it correctly, that’s when I want to check the transcript. But in that situation, generated transcripts or live captions don’t have any better idea of what was said than I do and indeed they are often worse.
To me it’s far better to have the podcast provider tell you what they actually said. Of course, many podcasts are still going to use AI to generate their transcript, like Bloomberg and others are doing now, but at least they have the ability to correct mistakes on their end with this approach. This seems far superior to Castro compounding the problem by generating another layer of error-prone transcripts.
Adoption of this feature is not yet as widespread as I would like. Depending on how you slice the data, maybe 5-10% of new podcast episodes from popular feeds have transcripts. But that’s more than enough to be useful, and several large podcasts with extensive back catalogs have transcripts on every episode. However, there are two things that make me confident this number will grow and this is worth investing in:
Having access to the transcript is actually really handy. I was shocked how much I missed them once I spent a few days listening to shows with good transcripts. As more people experience this and more services make it easier to bolt transcripts on, I think we'll see transcript adoption continue to grow.
This is only an initial release. We’re already working on improvements such as searching within the transcript, and we still want to pursue syncing audio to the text. We are exploring the best way to handle that, but I’m also open to ideas or discussion. If you are a podcast hosting company or creator who cares about transcripts and wants to help us provide a better experience to your listeners, please email [email protected] to discuss.
The big feature in this release is the ability to skip outros. This is probably our most requested feature so I'm thrilled we can finally deliver it. Merry Christmas!
In addition, for your holiday cheer, Castro has not one but two holiday icon choices for you. We’re also making a few other icons free for everyone, and we’ve revamped the splash screens to be a bit more modern and match the newer icon style.
Another important inclusion in this release won’t be noticeable yet, but we’re rolling out account profiles under the hood. This feature, initially started by the former development team in 2021, enables sharing the same profile across multiple devices. It’s the largest change needed for device sync. It’s still locked to a single device while we iron out any bugs, but we’ll be building out syncing features soon. If you have any issues with 2024.12 related to your account or Castro Plus, this is the likely culprit. Please email support and we’ll help get it resolved.
We’ve gotten feedback that Castro can build up disk usage over time. To help track down bugs and better communicate what we’re doing with your disk space, we’ve released a page for visualizing all files stored within the app (accessed via Settings -> User Data -> Storage). We’ll continue to add more info here as needed. If you feel Castro is taking up too much space, take a screenshot and pass it along in a support email. We can help explain what’s going on, and it also gives us a good starting place for finding and fixing any related bugs.
We haven’t done a great job of testing our latest features on small screens and older devices. This release includes a number of bug fixes that should improve the experience on the iPod Touch and similar devices. Now you can see the full episode information and won’t experience broken navigation bars, among other issues. We’re not abandoning iOS 15 any time soon.
Apologies for the brief outage December 17th. Our cloud provider shut down our database server for emergency maintenance while multiple team members were traveling. Typically we’d put the site in maintenance mode or swap in the standby for a longer outage, but by the time I got in front of a computer the server was already back up. Certain features of Castro were broken for about an hour. Sorry about that!
More to come soon. Happy holidays. Thanks for using Castro!
This is more technical than my previous posts. In terms of Castro itself, the app is doing great. Last week we again hit new user highs for 2024, presumably spurred by the pumpkin icon. The app is how users experience Castro, and it needs the most improvement, so that takes up 90% of our time, effort, and attention. We'll have more on the client soon, but that other 10% is also very important and is our subject here.
You can broadly think of Castro’s backend in two parts: the endpoints Castro interacts with when you use the app and the workers we use to update your podcast feeds. The workers actually do many things, but ~99% of their clock cycles are spent checking podcast feeds and updating them. This is not directly user facing, so in theory it's not a prime candidate for optimization. Most update jobs don't change anything at all, and when a feed does change, a few hundred milliseconds to update our database is hardly noticeable to the average user.
But in aggregate these jobs add up, and I've noticed that as we've tacked on various new checks and features to the worker jobs, their execution time has crept upward from ~1 second to ~1.3-1.4 seconds. Given my knowledge of what's actually happening, even 1 second seems a bit too long. I want to improve this number, but I don't want to spend much time on it, because the marginal worker is pretty cheap and we have many other things to focus on.
So what are these jobs doing and how can we make it better?
I inherited the system more than I designed it, but per my understanding the Castro backend is fairly typical. We run a Ruby on Rails app with a Postgres database, and we use Sidekiq to update our podcast feeds. There are many types of workers, but a podcast update job does 3 things:
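Roughly, and with hypothetical names standing in for the real code, a sketch of that shape:

```ruby
class PodcastUpdateJob
  include Sidekiq::Job

  def perform(podcast_id)
    podcast = Podcast.find(podcast_id)        # 1. load the feed's record
    response = fetch_feed(podcast.feed_url)   # 2. fetch the feed from its host
    apply_changes(podcast, response)          # 3. write any changes to Postgres
  end
end
```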
So it’s a lazy afternoon and I decide I’m going to make this better. But I’m not going to refactor anything or make any large changes. Instead I’m going to give myself an hour or two, poke at things a little bit, and abandon my efforts if they prove fruitless.
I don't know Ruby or Rails well. I'm much more comfortable with statically typed languages with meaningful function names, but I worked on performance optimizations for a living at one time, so I have some relevant experience. The biggest thing I've learned working on performance is that 80% of the gain is going to come from <20% of the effort. For the vast majority of software systems, open source or otherwise, nobody is monitoring what every line of code in production is doing, so just looking carefully at things with fresh eyes will usually yield something that can be improved.
The first thing I need to do is google how to profile Ruby in production. rbspy quickly comes up as the best tool for the job, and indeed it proves incredibly helpful. As I said, almost all the worker time is spent updating feeds, and we're doing things the lazy way, so I don't bother isolating a specific job or any other setup. I just run rbspy for a few minutes in production to see what those workers are actually doing:
Roughly speaking we can separate the resulting graph into four distinct parts:
On the surface I'm not sure this is bad. I didn't have strong priors. Redis overhead seems high, so we'll check that in a moment. Most of our network requests are very quick, but it's also the thing we're doing the most, and all requests are to random third parties. 49% could be reasonable; let's check what's actually happening in there.
If the reader clicked that last link, they know just as much as I do, but from a cursory glance at the code it seems like it's just setting up the trust store for the request. Mind you, this is 13% of all Castro's worker time. I took several traces to ensure it was representative (we are trying to be lazy but not stupid; going down a rabbit hole based on an outlier would waste even more time).
Trust stores should generally be fine to reuse across requests, at least for our purposes. I guess what is happening is that every part of the network stack is being torn down and reconstructed every time. If you were just making an occasional network request, it might not matter very much. Since we're doing this tens of millions of times per day, that setup adds up. From the perspective of a client engineer, this is an unexpected source of performance issues. When writing a client-side HTTP library, reusing heavy files that don't change would be an obvious thing to do. (e.g. Here is OkHttp setting it up once for the whole client.)
We can do better.
I could set up a trust store in advance and reuse it. Net::HTTP looks like it's checking for an existing file in that line of code, so there must be a config option. But do I really want to be creating trust stores? I do not want to start looking at OpenSSL API documentation. Think lazier.
Wait. Shouldn't the networking stack just handle this? Maybe there's a simple way to turn on resource pooling. Luckily, others have done actual work on this topic, so we can just breeze through some blog posts. We stand on the shoulders of giants, and late-2017 WeWork was extremely helpful, so I hope everything went well for them the next couple of years.
Anyway the upshot after thinking about this:
I add Typhoeus to the Gemfile, and it's just one line of code to swap out the adapter.
```ruby
Faraday.new ...
  builder.adapter Faraday.default_adapter
end
```
Becomes
```ruby
Faraday.new ...
  builder.adapter :typhoeus
end
```
I'm not kidding, that was the whole change. The tests pass. After deploying to a test worker and making sure everything works, I give it a production workload for an hour. While it's running, I look into a few small issues in the flamegraph and add some better Redis connection pooling to cut down on some of that initial 18% above. I also disable the Typhoeus HTTP cache (add cache: false to the above code snippet), as I notice cache setup showing up on new traces, and we have custom cache handling outside the HTTP layer anyway. Test everything more, deploy all this to production, and let it sit overnight.
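Putting the two adapter tweaks together, the setup ends up looking something like this (a sketch; the surrounding Faraday configuration is elided as above):

```ruby
Faraday.new ...
  # libcurl-backed adapter that reuses connections across requests;
  # Typhoeus's own HTTP cache disabled, since our caching lives outside the HTTP layer
  builder.adapter :typhoeus, cache: false
end
```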
That's maybe 2 hours of work. A large improvement: we're back to ~1 second, which was my whole goal anyway. I can go back to working on Castro's client. Mission Accomplished.
The next day I just can’t resist taking a few more production traces.
The network request has gone from 49% of our time to 19%. But I'm pretty shocked to find that with the improved networking speed, 20% of worker time is now spent in the ActiveRecord connection pool. That can't be right; what is happening? Taking more traces reveals this was actually an outlier on the low side: most traces spend 25-30% of all time waiting on an ActiveRecord connection.
I verify we're following the advice here. We have a very standard Sidekiq setup with 8 threads per worker, and each of those threads should have a dedicated connection. What are these threads all waiting on? I could add more connections to the pool, but why would I need to? I feel like I'm missing something more fundamental.
I'm basically treating ActiveRecord as a black box, but of course it isn't. The right thing to do might be to read more blog posts, crawl through GitHub issues, read the ActiveRecord source code, and figure out why a connection wouldn't be freed. (Perhaps if you've worked with AR, you've already guessed the solution.) But let's just try a couple things first.
Maybe ActiveRecord is not very good at closing connections after a job runs? What if we try to clear them proactively? Google for the right API (clear_all_active_connections!), make sure it only affects the current thread, and add it after each run.
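Sketched out (the method name is the one from the googling above; its exact receiver and availability vary by Rails version), the attempt looks something like:

```ruby
def perform(podcast_id)
  update_feed(podcast_id)   # hypothetical stand-in for the real job body
ensure
  # proactively return this thread's connections to the pool after each run
  ActiveRecord::Base.connection_handler.clear_all_active_connections!
end
```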
Run that on some representative data on a test worker.
Nope. If anything, this is worse. Let's take a look at the job code above. Where would ActiveRecord be holding an unused connection? ... Ah: when we're querying the third-party podcast server, we don't need to do anything with our database, and we don't know how long the query will take. Seems like we might want to release the connection before querying. Acquiring a new connection afterward has overhead, but it's not going to outweigh 20-30% of all server time. (I asked ChatGPT if that's a good idea and it said no, but this still seems like a good idea to me.)
We try this:
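A minimal sketch of the idea (fetch_feed and apply_changes are hypothetical stand-ins for the real job code):

```ruby
# Give the connection back before the slow third-party fetch...
ActiveRecord::Base.connection_pool.release_connection
response = fetch_feed(podcast.feed_url)   # no DB connection held during network I/O

# ...then check one out again only for the database writes.
ActiveRecord::Base.connection_pool.with_connection do
  apply_changes(podcast, response)
end
```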
Run some tests locally and deploy to a worker for testing. I don’t even need a flamegraph to know this was a good change, as I notice the number of jobs we’re completing is significantly higher.
Jackpot. Acquiring a connection effectively disappears from the trace. (The impact was much larger than I'd anticipated; I should've tried optimizing this sooner.) It seems the issue is that ActiveRecord does not always reuse the same connection within a thread and doesn't free connections very quickly, so releasing them proactively makes a huge difference. Of course, I am not the only person to notice this.
We've just cut Castro's entire backend workload in half with two sessions of debugging and what amounts to two lines of code. Further tweaking got the average down to ~0.5 seconds. In theory we can update every feed much more frequently, and indeed we're already doing that; you might have noticed over the past week or two. We freed up so much worker capacity that we can't use it all yet, as I'm not sure the rest of the system could handle the load, meaning we'll run fewer workers going forward and save server costs as well.
Can we do better?
The graph above shows less popular feeds with only a few subscribers, so it's the worst-case scenario. (The bump is where we slowed things down to ensure nothing broke before ramping things up in production.)
Historically, there have been complaints about Castro feeds sometimes falling hours or days behind. This does not happen anymore and hasn't for a long time. Today, every active feed is updated on our server every 10-12 minutes, an improvement from ~20 minutes before last week. At peak times this number may slip a bit, but honestly not much, and we're getting better there as well.
Hope you enjoy the faster feed updates!
This week's release (2024.9.1) fixed some small UI bugs and crash issues from the previous build. I promised more alternate icon choices on social media, but those were pushed to the next build as we really needed to get the crash fixes out. Don't worry, the icons are still coming in the next couple of weeks.
I’m very excited about the new Explore tab in Castro. While it may not have felt like the most pressing need for daily users of Castro, it makes the app feel much more alive and dynamic. It’s important for widening our reach and for people coming to the app for the first time. It's also a marked improvement for non-US users as the old homepage was rarely updated in other locales. Surfacing not just podcasts but relevant episodes was really important to me, and I find the feature genuinely useful. It's night and day compared to the old discovery tab.
I’m also really happy we were able to support some of the new iOS 18 features the week they came out. This should be table stakes for an app like Castro. Users have noticed, and these changes are resonating:
With the recent releases, Castro is showing its best user numbers since our acquisition. The last few months have been Castro's first period of real sustained user growth since 2020, and that's without any marketing.
Of course, change can be hard. Some people didn't love the new icon even though it is objectively glorious. There are also some discrepancies between the older and newer parts of the app as we modernize it. This is not ideal, and I've received a few complaints, so I want to say as firmly as I possibly can that I believe we're doing the right thing here.
Castro isn't a museum. We have to ship new features. The product has been stagnant for too long, and we can hardly say it's the best app for listening to podcasts on iOS when it still lacks basic features like device sync. We also can't sit in a room and rewrite the app from scratch for 6-12 months while the product remains stagnant. Even if we could, that's not a good way to ship software. Full rewrites rarely turn out to be a good idea.
So we're going to keep shipping, improving the UX and codebase as we go. The app will feel more cohesive as we progress, and in the meantime you'll get new features like episode artwork and explore. You'll also get the opportunity to give feedback on every incremental change, rather than everything new being dumped on you at once.
As I've shared in previous posts, a loose roadmap in no particular order with no timelines:
The last month or so has added a lot of new stuff, and these are easily the most substantial changes since acquisition. But we’re just getting started and have a ton more to ship. I believe the best is yet to come for Castro. Hope you'll stick around for the ride.
There is a new iOS app update available in the App Store right now. I feel this update requires some context, so I want to give some information on what's going on with Castro and what's coming down the pike in the near future.
We've been testing a new self-serve ad portal on Castro for a month or so. It's been well-received, and there's a lot of appetite for advertising podcasts on Castro. The offering is basic for now, but we'll have more updates on this soon. If you have a podcast to promote, please give it a try.
Today's update is the last of what I would call the "foundational" releases we've been doing. The next few updates should be more product-focused and will start to include real changes to the UX. But for today we still have to eat our vegetables.
I want to note that the number of changes we're making in this release and the next one is likely to result in some regressions. The first commit in Castro's repo is from November 17, 2014, so there are some dusty corners, and we're going to have to break some eggs. Bear with us, and hopefully the brave TestFlight users will continue to protect most of you. They've done a masterful job over the past two weeks.
Finally, a few people have reached out asking how they can help support Castro. We have a select group of loyal fans, and their retention numbers are very good, but we don't have a massive userbase outside of our premium users. If you want to help, the main thing we need is users and mindshare. Tell your friends to try Castro again. Write your favorite podcasts/tech bloggers and ask them to link to us on their websites and talk about us on their shows. Every marginal user makes the platform more viable and justifies putting more investment and resources into making the app better.
Thanks for reading and look out for more updates in the near future.
Yesterday we began rolling out support for WebSub, which allows more precise timing for your podcast updates. ~25% of the popular podcast feeds on our service already have WebSub support, and though the protocol has been around a long time, it's receiving increasing support in the podcast ecosystem, so we'll hopefully continue to see that number go up. Read more about WebSub here and here.
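For the curious: a feed opts in by advertising a hub, and subscribers like Castro register with that hub to receive pushed updates instead of polling on a schedule. In an RSS feed, the discovery links look something like this (URLs hypothetical):

```xml
<atom:link rel="hub" href="https://example.com/hub"/>
<atom:link rel="self" href="https://example.com/feed.xml"/>
```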
Castro users are already seeing the benefits of more timely notifications. More improvements to come soon!
Just wanted to give a quick recap of what we've been up to with Castro since acquiring the company.
We released our first few client updates, after Apple finally let us transfer the app to a new account. Primarily we have been focused on the major pain points that people were emailing us about:
We've also updated every major piece of the backend: new servers, newer versions of Elasticsearch, Redis, Sidekiq, nginx, Rails, etc. This stuff hadn't been touched in several years, and bringing it up to date will allow us to iterate much more quickly.
Speaking of the backend, popular podcasts on Castro always update within about 10 minutes right now, and all podcasts should update every 40 minutes at the most, with occasional dips during peak hours. We're bringing these numbers down over time and hope everything will eventually be within 15 minutes, with popular feeds being more like 2-5 minutes. We still have work to do, but in general the backend is way more reliable than it was a few months ago, and the issues people are seeing now are mostly client driven.
One big thing we haven't really addressed is playback issues (background crashes, lost playback position, etc.). This is our current major focus. I was hoping we'd have an update out this week, but it's not trivial to untangle everything, so we're going to take some extra time to get it right. We've even added a person to the team just to focus on this.
As I've communicated elsewhere, here is a loose roadmap of where we're going over the next few months.
Some smaller features people have been asking about that I can promise we are working on and will come out in the next couple of updates:
Also, I can share that our Android app incorporated Castro's search and explore functionality in an update released this morning. It's a small but important step toward Castro for Android.
Finally, if you are a student or junior developer who lives in New York and is interested in podcast apps, I'd really like to add one more person to the team on a part-time basis. Please reach out.
Please email support with any questions.
Thanks for using Castro!
- Dustin
We wanted to give everyone a heads-up that we are making some updates to our subscription tiers and pricing.
This is a price drop for the vast majority of subscribers. We think these numbers and changes are fair, and they set us up well to offer Castro at a consistent price for the foreseeable future. This pricing is effective today. If you like the app and want to see it get better, now is a great time to subscribe.
Update: This will be €24.99 EUR as well; some international prices will go into effect Friday 4/12 due to an oversight.
The two-tiered pricing has caused a great deal of confusion. Castro "3" has been out since 2018, and it's no longer sensible for certain users to be paying a lower price.
If you are on an early supporter plan right now, it will of course be valid until it expires. Then it will renew at the new pricing. The actual subscription will remain in the App Store. It will just be equivalent in every way to the normal subscription.
As a thank you for supporting us through this transition and to compensate for the changes above, all existing subscribers have received additional time on their subscription (monthly was extended 30 extra days, quarterly 45, annual 90).
Thanks for using Castro and email [email protected] with any questions or concerns.
We are excited to announce Castro has been purchased by Bluck Apps. Castro is a great app with a long history on iOS and many passionate fans, and it will continue to operate in its current form. This is a return to its independent roots. We won’t be making any drastic changes, like overhauling the UI to look more like TikTok. We’re not adding an AI chatbot. We’ll just keep running the podcast service you already love, with a few tweaks to modernize and keep things running smoothly.
We know that over the past few months Castro has not communicated well. The new team's #1 priority will be keeping our users informed. Starting today, all support emails will be answered in a timely manner. Major changes will be broadcast widely, and we'll let you know if something is going on with the app. We're also working our way through the thousands of messages we've received over the last few months, so if you've already emailed us, don't worry, we'll get to you. Feel free to ping the thread again if we haven't responded.
We can’t guarantee every single issue will be fixed ASAP, but we’ll at least let you know we’re working on it.
Bluck Apps is an independently run app studio and consulting agency. We already have a podcast app on Android. Though the UX is somewhat different, both Aurelian and Castro are designed to give a delightful experience to people who *really* love podcasts and listen to many of them. This is a niche, and we intend to serve that niche. If you have over 100 podcast subscriptions and listen to them all semi-regularly, you are probably one of our people.
We are very committed to the open podcasting ecosystem and taking over such a well-designed independent app is very cool for us.
In the short term, nothing changes. If you’re a paying subscriber or free user, just keep using the app. If you’re experiencing issues or have feedback, email us at [email protected].
In the coming weeks and months, we’ll be making some changes under the hood to make the backend more stable and make sure new episodes sync more quickly. Once things are stabilized and the transition is complete, we’ll be turning our attention toward new features, such as syncing across devices. Feel free to email us if you have suggestions on what we should be working on first.
We’ll be moving Aurelian Audio under the Castro umbrella, letting it benefit from Castro’s superior search and backend capabilities. If you’re a current user of the Aurelian app, it’s only going to get better. You may want to sign up for premium now because it won’t be $1 / year forever.
Sorry, there’s no other path forward for Castro, which has ongoing maintenance and server costs. Think of it like Patreon or Substack. Your monthly fee is supporting the creation of the product. However, we’re not buying Castro to milk it for revenue. The price of Castro is not going up at this time.
We are committed to keeping our users informed going forward, and we’re making an effort to reach people wherever they are.
Castro is now owned and operated in Brooklyn, New York, rather than Canada. We’ll be updating some parts of the website and relevant legal agreements to reflect this, but it will have no impact on the way the app functions.
We’d like to thank the founders of Castro for building a great app and the seller for being extremely professional and courteous. We’re going to take good care of the app they made. If you have any other questions, please get in touch.